author    EuAndreh <eu@euandre.org>  2025-04-18 02:17:12 -0300
committer EuAndreh <eu@euandre.org>  2025-04-18 02:48:42 -0300
commit    020c1e77489b772f854bb3288b9c8d2818a6bf9d (patch)
tree      142aec725a52162a446ea7d947cb4347c9d573c9 /src/content/en/tils
parent    Makefile: Remove security.txt.gz (diff)
git mv src/content/* src/content/en/
Diffstat (limited to 'src/content/en/tils')
-rw-r--r--  src/content/en/tils/2020/08/12/filename-timestamp.adoc      31
-rw-r--r--  src/content/en/tils/2020/08/13/code-jekyll.adoc            159
-rw-r--r--  src/content/en/tils/2020/08/14/browse-git.adoc              76
-rw-r--r--  src/content/en/tils/2020/08/16/git-search.adoc              49
-rw-r--r--  src/content/en/tils/2020/08/28/grep-online.adoc            133
-rw-r--r--  src/content/en/tils/2020/09/04/cli-email-fun-profit.adoc    76
-rw-r--r--  src/content/en/tils/2020/09/05/oldschool-pr.adoc           110
-rw-r--r--  src/content/en/tils/2020/10/11/search-git-history.adoc      29
-rw-r--r--  src/content/en/tils/2020/11/08/find-broken-symlink.adoc     25
-rw-r--r--  src/content/en/tils/2020/11/12/diy-nix-bash-ci.adoc         63
-rw-r--r--  src/content/en/tils/2020/11/12/git-bisect-automation.adoc   25
-rw-r--r--  src/content/en/tils/2020/11/12/useful-bashvars.adoc         61
-rw-r--r--  src/content/en/tils/2020/11/14/gpodder-media.adoc           21
-rw-r--r--  src/content/en/tils/2020/11/30/git-notes-ci.adoc           112
-rw-r--r--  src/content/en/tils/2020/12/15/shellcheck-repo.adoc        165
-rw-r--r--  src/content/en/tils/2020/12/29/svg.adoc                    125
-rw-r--r--  src/content/en/tils/2021/01/12/curl-awk-emails.adoc        148
-rw-r--r--  src/content/en/tils/2021/01/17/posix-shebang.adoc           58
-rw-r--r--  src/content/en/tils/2021/04/24/cl-generic-precedence.adoc  149
-rw-r--r--  src/content/en/tils/2021/04/24/clojure-autocurry.adoc      135
-rw-r--r--  src/content/en/tils/2021/04/24/scm-nif.adoc                 61
-rw-r--r--  src/content/en/tils/2021/07/23/git-tls-gpg.adoc             45
-rw-r--r--  src/content/en/tils/2021/08/11/js-bigint-reviver.adoc       89
-rw-r--r--  src/content/en/tils/categories.adoc                          1
-rw-r--r--  src/content/en/tils/index.adoc                               7
25 files changed, 1953 insertions, 0 deletions
diff --git a/src/content/en/tils/2020/08/12/filename-timestamp.adoc b/src/content/en/tils/2020/08/12/filename-timestamp.adoc
new file mode 100644
index 0000000..aa8d63b
--- /dev/null
+++ b/src/content/en/tils/2020/08/12/filename-timestamp.adoc
@@ -0,0 +1,31 @@
+= Simple filename timestamp
+:categories: shell
+
+When writing Jekyll posts or creating log files with dates on them, I usually
+struggle with finding a direct way of accomplishing that. There's a simple
+solution: `date -I`.
+
+[source,sh]
+----
+./my-program.sh > my-program.$(date -I).log
+cp post-template.md _posts/$(date -I)-post-slug.md
+----
+
+Using this GNU coreutils tool, you can run `touch $(date -I).md` to readily
+create a `2020-08-12.md` file.
+
+I always had to read `man date` or search the web over and over; after doing
+this repeatedly, it became clear that both `date -I` and `date -Is` (the `s`
+stands for seconds) are what I'm looking for 95% of the time:
+
+[source,sh]
+----
+# inside my-program.sh
+echo "Program started at $(date -Is)"
+# output is:
+# Program started at 2020-08-12T09:04:58-03:00
+----
+
+Both date formats are hierarchical, having the bigger time intervals to the
+left. This means that you can easily sort them (and even tab-complete them)
+with no extra effort or tool required.
diff --git a/src/content/en/tils/2020/08/13/code-jekyll.adoc b/src/content/en/tils/2020/08/13/code-jekyll.adoc
new file mode 100644
index 0000000..6bd90b0
--- /dev/null
+++ b/src/content/en/tils/2020/08/13/code-jekyll.adoc
@@ -0,0 +1,159 @@
+= Anchor headers and code lines in Jekyll
+
+:empty:
+:jekyll: https://jekyllrb.com/
+:kramdown: https://kramdown.gettalong.org/
+:rouge: https://rouge.jneen.net/
+:jekyll-hook: https://jekyllrb.com/docs/plugins/hooks/
+
+The default Jekyll toolbox ({jekyll}[Jekyll], {kramdown}[kramdown] and
+{rouge}[rouge]) doesn't provide a configuration option to add anchors to
+headers and code blocks.
+
+The best way I found of doing this is by creating a simple Jekyll plugin, more
+specifically, a {jekyll-hook}[Jekyll hook].  These allow you to jump into the
+Jekyll build and add a processing stage before or after Jekyll performs
+something.
+
+All you have to do is add the code to `_plugins/my-jekyll-plugin-code.rb`, and
+Jekyll knows to pick it up and call your code at the appropriate time.
+
+== Anchor on headers
+
+:jemoji: https://github.com/jekyll/jemoji
+:jekyll-mentions: https://github.com/jekyll/jekyll-mentions
+:html-regex: https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454
+
+Since I wanted to add anchors to headers in all documents, this Jekyll hook
+works on `:documents` after they have been transformed into HTML, the
+`:post_render` phase:
+
+[source,ruby]
+----
+Jekyll::Hooks.register :documents, :post_render do |doc|
+ if doc.output_ext == ".html"
+ doc.output =
+ doc.output.gsub(
+ /<h([1-6])(.*?)id="([\w-]+)"(.*?)>(.*?)<\/h[1-6]>/,
+ '<a href="#\3"><h\1\2id="\3"\4>\5</h\1></a>'
+ )
+ end
+end
+----
+
+I've derived my implementations from two
+"official"{empty}footnote:official[
+ I don't know how official they are, I just assumed it because they live in the
+ same organization inside GitHub that Jekyll does.
+] hooks, {jemoji}[jemoji] and {jekyll-mentions}[jekyll-mentions].
+
+All I did was wrap the header tag inside an `<a>`, and set the `href` of that
+`<a>` to the existing id of the header.  Before the hook, the HTML looks like:
+
+[source,html]
+----
+...some unmodified text...
+<h2 id="my-header">
+ My header
+</h2>
+...more unmodified text...
+----
+
+And the hook should turn that into:
+
+[source,html]
+----
+...some unmodified text...
+<a href="#my-header">
+ <h2 id="my-header">
+ My header
+ </h2>
+</a>
+...more unmodified text...
+----
+
+The regexp used tries to match only h1-h6 tags, and keeps the rest of the HTML
+attributes untouched: this isn't a general HTML parser, but the generated
+HTML is somewhat under your control.  Use at your own risk, because
+{html-regex}[you shouldn't parse HTML with regexps].  Also, I used this
+strategy in my environment, where no other plugins are installed; I haven't
+considered how this approach may conflict with other Jekyll plugins.
+
+In the new anchor tag you can add your custom CSS class to style it as you wish.
+
+== Anchor on code blocks
+
+Adding anchors to code blocks needs a little bit of extra work: line numbers
+themselves don't have preexisting ids, so we need to generate them without
+duplications across multiple code blocks in the same page.
+
+Similarly, this Jekyll hook also works on `:documents` in the `:post_render`
+phase:
+
+[source,ruby]
+----
+PREFIX = '<pre class="lineno">'
+POSTFIX = '</pre>'
+Jekyll::Hooks.register :documents, :post_render do |doc|
+ if doc.output_ext == ".html"
+ code_block_counter = 1
+ doc.output = doc.output.gsub(/<pre class="lineno">[\n0-9]+<\/pre>/) do |match|
+ line_numbers = match
+ .gsub(/<pre class="lineno">([\n0-9]+)<\/pre>/, '\1')
+ .split("\n")
+
+ anchored_line_numbers_array = line_numbers.map do |n|
+ id = "B#{code_block_counter}-L#{n}"
+ "<a id=\"#{id}\" href=\"##{id}\">#{n}</a>"
+ end
+ code_block_counter += 1
+
+ PREFIX + anchored_line_numbers_array.join("\n") + POSTFIX
+ end
+ end
+end
+----
+
+This solution assumes the default Jekyll toolbox with code line numbers turned
+on in `_config.yml`:
+
+[source,yaml]
+----
+kramdown:
+ syntax_highlighter_opts:
+ span:
+ line_numbers: false
+ block:
+ line_numbers: true
+----
+
+The anchors go from B1-L1 to BN-LN, using the `code_block_counter` to track
+which code block we're in and avoid duplicating anchor ids.  Before the hook
+the HTML looks like:
+
+[source,html]
+----
+...some unmodified text...
+<pre class="lineno">1
+2
+3
+4
+5
+</pre>
+...more unmodified text...
+----
+
+And the hook should turn that into:
+
+[source,html]
+----
+...some unmodified text...
+<pre class="lineno"><a id="B1-L1" href="#B1-L1">1</a>
+<a id="B1-L2" href="#B1-L2">2</a>
+<a id="B1-L3" href="#B1-L3">3</a>
+<a id="B1-L4" href="#B1-L4">4</a>
+<a id="B1-L5" href="#B1-L5">5</a></pre>
+...more unmodified text...
+----
+
+Happy writing :)
diff --git a/src/content/en/tils/2020/08/14/browse-git.adoc b/src/content/en/tils/2020/08/14/browse-git.adoc
new file mode 100644
index 0000000..6b3ff6d
--- /dev/null
+++ b/src/content/en/tils/2020/08/14/browse-git.adoc
@@ -0,0 +1,76 @@
+= Browse a git repository at a specific commit
+:categories: git
+
+I commonly use tools like `git log` together with `git show` when inspecting
+past changes in a repository:
+
+[source,sh]
+----
+git log
+# search for the commit I'm looking for
+git show <my-commit>
+# see the diff for the commit
+----
+
+But I also wanted not only to look at the diff of a specific commit, but to
+browse the whole repository at that specific commit.
+
+I used to accomplish it the "brute force" way: clone the whole repository in
+another folder and checkout the commit there:
+
+[source,sh]
+----
+git clone <original-repo> /tmp/tmp-repo-clone
+cd /tmp/tmp-repo-clone
+git checkout <my-commit>
+----
+
+But git itself allows us to specify the directory of the checkout by using the
+`--work-tree` global git flag.  This is what `man git` says about it:
+
+[source,text]
+----
+--work-tree=<path>
+ Set the path to the working tree. It can be an absolute path or a path relative to the current working
+ directory. This can also be controlled by setting the GIT_WORK_TREE environment variable and the
+ core.worktree configuration variable (see core.worktree in git-config(1) for a more detailed
+ discussion).
+----
+
+It allows us to set the desired path of the working tree.  So if we want to
+copy the contents of the current working tree into `copy/`:
+
+[source,sh]
+----
+mkdir copy
+git --work-tree=copy/ checkout .
+----
+
+After that, `copy/` will contain a replica of the code in HEAD.  But to check
+out a specific commit, we need some extra parameters:
+
+[source,sh]
+----
+git --work-tree=<dir> checkout <my-commit> -- .
+----
+
+There's an extra `-- .` at the end, which initially looks like we're sending
+Morse signals to git, but we're actually telling `git-checkout` which
+subdirectory of `<my-commit>` we want to look at.  Which means we can do
+something like:
+
+[source,sh]
+----
+git --work-tree=<dir> checkout <my-commit> -- src/
+----
+
+And with that, `<dir>` will only contain what was inside `src/` at
+`<my-commit>`.
+
+After any of those checkouts, you have to `git reset .` to reset your current
+staging area back to what it was before the checkout.
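+
The whole procedure can be wrapped in a small function; this is just a sketch of the steps above, and `browse-at` is a hypothetical name:

```shell
# browse-at <commit> [path]: materialize <commit> (or a subdirectory
# of it) into a fresh temporary directory, and print that directory
browse-at() {
    dir="$(mktemp -d)"
    git --work-tree="$dir" checkout "$1" -- "${2:-.}"
    git reset --quiet .   # undo the staging that the checkout caused
    echo "$dir"
}
```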
+
+== References
+
+:so-link: https://stackoverflow.com/a/16493707
+
+. {so-link}[GIT: Checkout to a specific folder] (StackOverflow)
diff --git a/src/content/en/tils/2020/08/16/git-search.adoc b/src/content/en/tils/2020/08/16/git-search.adoc
new file mode 100644
index 0000000..4113f3f
--- /dev/null
+++ b/src/content/en/tils/2020/08/16/git-search.adoc
@@ -0,0 +1,49 @@
+= Search in git
+:categories: git
+
+Here's a useful trio to know about to help you search things in git:
+
+. `git show <commit>`
+. `git log --grep='<regexp>'`
+. `git grep '<regexp>' [commit]`
+
+== 1. `git show <commit>`
+
+Show a specific commit and its diff:
+
+[source,sh]
+----
+git show
+# shows the latest commit
+git show <commit>
+# shows a specific <commit>
+git show v1.2
+# shows commit tagged with v1.2
+----
+
+== 2. `git log --grep='<regexp>'`
+
+Search through the commit messages:
+
+[source,sh]
+----
+git log --grep='refactor'
+----
+
+== 3. `git grep '<regexp>' [commit]`
+
+:browse-article: link:../14/browse-git.html
+
+Search content in git history:
+
+[source,sh]
+----
+git grep 'TODO'
+# search the repository for the "TODO" string
+git grep 'TODO' $(git rev-list --all)
+# search the whole history for "TODO" string
+----
+
+And if you find an occurrence of the regexp in a specific commit and you want to
+browse the repository in that point in time, you can {browse-article}[use git
+checkout for that].
diff --git a/src/content/en/tils/2020/08/28/grep-online.adoc b/src/content/en/tils/2020/08/28/grep-online.adoc
new file mode 100644
index 0000000..77363ab
--- /dev/null
+++ b/src/content/en/tils/2020/08/28/grep-online.adoc
@@ -0,0 +1,133 @@
+= Grep online repositories
+:categories: git
+
+:cgit: https://git.zx2c4.com/cgit/
+
+I often find interesting source code repositories online that I want to grep for
+some pattern but I can't, because either:
+
+* the repository is on {cgit}[cgit] or a similar code repository that doesn't
+ allow search in files, or;
+* the search function is really bad, and doesn't allow me to use regular
+ expressions for searching patterns in the code.
+
+Here's a simple script that allows you to overcome that problem easily:
+
+[source,sh]
+----
+#!/usr/bin/env bash
+set -eu
+
+end="\033[0m"
+red="\033[0;31m"
+red() { echo -e "${red}${1}${end}"; }
+
+usage() {
+ red "Missing argument $1.\n"
+ cat <<EOF
+Usage:
+ $0 <REGEX_PATTERN> <REPOSITORY_URL>
+
+ Arguments:
+ REGEX_PATTERN Regular expression that "git grep" can search
+ REPOSITORY_URL URL address that "git clone" can download the repository from
+
+Examples:
+ Searching "make get-git" in cgit repository:
+ git search 'make get-git' https://git.zx2c4.com/cgit/
+ git search 'make get-git' https://git.zx2c4.com/cgit/ -- \$(git rev-list --all)
+EOF
+ exit 2
+}
+
+
+REGEX_PATTERN="${1:-}"
+REPOSITORY_URL="${2:-}"
+[[ -z "${REGEX_PATTERN}" ]] && usage 'REGEX_PATTERN'
+[[ -z "${REPOSITORY_URL}" ]] && usage 'REPOSITORY_URL'
+
+mkdir -p /tmp/git-search
+DIRNAME="$(echo "${REPOSITORY_URL%/}" | rev | cut -d/ -f1 | rev)"
+if [[ ! -d "/tmp/git-search/${DIRNAME}" ]]; then
+ git clone "${REPOSITORY_URL}" "/tmp/git-search/${DIRNAME}"
+fi
+pushd "/tmp/git-search/${DIRNAME}"
+
+shift 3 || shift 2 # when "--" is missing
+git grep "${REGEX_PATTERN}" "${@}"
+----
+
+It is a wrapper around `git grep` that downloads the repository when missing.
+Save it in a file called `git-search`, make the file executable and add it to
+your path.
+
+Overview:
+
+* _lines 1~2_:
++
+Bash shebang and the `set -eu` options to exit on error or undefined
+variables.
+
+* _lines 4~30_:
++
+Usage text to be printed when providing fewer arguments than expected.
+
+* _line 33_:
++
+Extract the repository name from the URL, removing trailing slashes.
+
+* _lines 34~37_:
++
+Download the repository when missing and go to the folder.
+
+* _line 39_:
++
+Make the variable `$@` contain the rest of the unused arguments.
+
+* _line 40_:
++
+Perform `git grep`, forwarding the remaining arguments from `$@`.
+
+Example output:
+
+[source,sh]
+----
+$ git search 'make get-git' https://git.zx2c4.com/cgit/
+Clonage dans '/tmp/git-search/cgit'...
+remote: Enumerating objects: 542, done.
+remote: Counting objects: 100% (542/542), done.
+remote: Compressing objects: 100% (101/101), done.
+warning: object 51dd1eff1edc663674df9ab85d2786a40f7ae3a5: gitmodulesParse: could not parse gitmodules blob
+remote: Total 7063 (delta 496), reused 446 (delta 441), pack-reused 6521
+Réception d'objets: 100% (7063/7063), 8.69 Mio | 5.39 Mio/s, fait.
+Résolution des deltas: 100% (5047/5047), fait.
+/tmp/git-search/cgit ~/dev/libre/songbooks/docs
+README: $ make get-git
+
+$ git search 'make get-git' https://git.zx2c4.com/cgit/
+/tmp/git-search/cgit ~/dev/libre/songbooks/docs
+README: $ make get-git
+----
+
+Subsequent greps on the same repository are faster because no download is
+needed.
+
+When no argument is provided, it prints the usage text:
+
+[source,sh]
+----
+$ git search
+Missing argument REGEX_PATTERN.
+
+Usage:
+ /home/andreh/dev/libre/dotfiles/scripts/ad-hoc/git-search <REGEX_PATTERN> <REPOSITORY_URL>
+
+ Arguments:
+ REGEX_PATTERN Regular expression that "git grep" can search
+ REPOSITORY_URL URL address that "git clone" can download the repository from
+
+Examples:
+ Searching "make get-git" in cgit repository:
+ git search 'make get-git' https://git.zx2c4.com/cgit/
+ git search 'make get-git' https://git.zx2c4.com/cgit/ -- $(git rev-list --all)
+----
diff --git a/src/content/en/tils/2020/09/04/cli-email-fun-profit.adoc b/src/content/en/tils/2020/09/04/cli-email-fun-profit.adoc
new file mode 100644
index 0000000..1da1154
--- /dev/null
+++ b/src/content/en/tils/2020/09/04/cli-email-fun-profit.adoc
@@ -0,0 +1,76 @@
+= Send emails using the command line for fun and profit!
+
+:ssmtp: https://wiki.archlinux.org/index.php/SSMTP
+:mailutils: https://mailutils.org/
+
+Here are a few reasons why:
+
+. send yourself and other people notifications of cronjobs, script runs, CI
+  jobs, _etc._
+. leverage the POSIX pipe `|`, and pipe emails away!
+. because you can.
+
+Reason 3 is the fun part, reasons 1 and 2 are the profit part.
+
+First {ssmtp}[install and configure SSMTP] for using, say, Gmail as the email
+server:
+
+[source,sh]
+----
+# file /etc/ssmtp/ssmtp.conf
+FromLineOverride=YES
+MailHub=smtp.gmail.com:587
+UseSTARTTLS=YES
+UseTLS=YES
+rewriteDomain=gmail.com
+root=username@gmail.com
+AuthUser=username
+AuthPass=password
+----
+
+Now install {mailutils}[GNU Mailutils] (`sudo apt-get install mailutils` or the
+equivalent on your OS), and send yourself your first email:
+
+[source,sh]
+----
+echo body | mail -aFrom:email@example.com email@example.com -s subject
+----
+
+And that's about it, you've got mail. Here are some more places where it might
+be applicable:
+
+[source,sh]
+----
+# report a backup cronjob, attaching logs
+set -e
+
+finish() {
+ status=$?
+ if [[ $status = 0 ]]; then
+ STATUS="SUCCESS (status $status)"
+ else
+ STATUS="FAILURE (status $status)"
+ fi
+
+ mail user@example.com \
+ -s "Backup job report on $(hostname): ${STATUS}" \
+ --content-type 'text/plain; charset=utf-8' \
+ -A"$LOG_FILE" <<< 'The log report is in the attachment.'
+}
+trap finish EXIT
+
+do-long-backup-cmd-here
+----
+
+[source,sh]
+----
+# share the output of a cmd with someone
+some-program | mail someone@example.com -s "The weird logs that I was talking about"
+----
+
+...and so on.
+
+You may consider adding an `alias mail='mail -aFrom:email@example.com'` so you
+don't keep re-entering the "From:" part.
+
+Send yourself some emails to see it working!
diff --git a/src/content/en/tils/2020/09/05/oldschool-pr.adoc b/src/content/en/tils/2020/09/05/oldschool-pr.adoc
new file mode 100644
index 0000000..392ec67
--- /dev/null
+++ b/src/content/en/tils/2020/09/05/oldschool-pr.adoc
@@ -0,0 +1,110 @@
+= Pull requests with Git, the old school way
+:categories: git
+
+:empty:
+:cgit: https://git.zx2c4.com/cgit/
+
+It might be news to you, as it was to me, that "pull requests" that you can
+create on a Git hosting provider’s web
+UI{empty}footnote:pr-webui[
+ And maybe even using the Git hosting provider’s API from the command line!
+] like GitLab/Bitbucket/GitHub actually come from Git itself:
+`git request-pull`.
+
+At the very core, they accomplish the same thing: both the original and the web
+UI ones are ways for you to request the project maintainers to pull in your
+changes from your fork. It’s like saying: "hi there, I did some changes on my
+clone of the repository, what do you think about bringing those in?".
+
+The only difference is that you’re working with only Git itself, so you’re not
+tied to any Git hosting provider: you can send pull requests across them
+transparently! You could even use your own {cgit}[cgit] installation. No need
+to be locked in by any of them, putting the "D" back in "DVCS": it’s a
+*distributed* version control system.
+
+== `git request-pull` introduction
+
+Here’s the raw output of a `git request-pull`:
+
+[source,sh]
+----
+$ git request-pull HEAD public-origin
+The following changes since commit 302c9f2f035c0360acd4e13142428c100a10d43f:
+
+ db post: Add link to email exchange (2020-09-03 21:23:55 -0300)
+
+are available in the Git repository at:
+
+ https://euandre.org/git/euandre.org/
+
+for you to fetch changes up to 524c646cdac4153e54f2163e280176adbc4873fa:
+
+ db post: better pinpoint sqlite unsuitability (2020-09-03 22:08:56 -0300)
+
+----------------------------------------------------------------
+EuAndreh (1):
+ db post: better pinpoint sqlite unsuitability
+
+ _posts/2020-08-31-the-database-i-wish-i-had.md | 12 ++++++------
+ 1 file changed, 6 insertions(+), 6 deletions(-)
+----
+
+That very first line is saying: "create me a pull request with only a single
+commit, defined by `HEAD`, and use the URL defined by `public-origin`".
+
+Here’s a pitfall: you may try using your `origin` remote at first where I put
+`public-origin`, but that often points to something like
+`git@example.com`, or `git.example.com:repo.git` (check that with
+`git remote -v | grep origin`).  In both cases those are addresses meant for
+interaction via SSH, and it would be better if your pull requests used an
+address ready for public consumption.
+
+A simple solution for that is for you to add the `public-origin` alias as the
+HTTPS alternative to the SSH version:
+
+[source,sh]
+----
+$ git remote add public-origin https://example.com/user/repo
+----
+
+Every Git hosting provider exposes repositories via HTTPS.
+
+Experiment with it yourself, and get acquainted with the CLI.
+
+== Delivering decentralized pull requests
+
+:cli-email: link:../04/cli-email-fun-profit.html
+
+Now that you can create the content of a pull request, you can just
+{cli-email}[deliver it] to the interested parties via email:
+
+[source,sh]
+----
+# send a PR with your last commit to the author's email
+git request-pull HEAD public-origin | mail author@example.com -s "PR: Add thing to repo"
+
+# send a PR with your last 5 commits to the project's mailing
+# list, including the patch
+git request-pull -p HEAD~5 public-origin | \
+ mail list@example.com -s "PR: Add another thing to repo"
+
+# send every commit that is new in "other-branch"
+git request-pull master public-origin other-branch | \
+  mail list@example.com -s 'PR: All commits from my "other-branch"'
+----
+
+== Conclusion
+
+:distgit: https://drewdevault.com/2018/07/23/Git-is-already-distributed.html
+
+In practice, I’ve never used or seen anyone use pull requests this way:
+everybody is just {distgit}[sending patches via email].
+
+If you stop to think about this model, the problem of "Git hosting providers
+becoming too centralized" is a non-issue, and "Git federation" proposals are
+less attractive than they may sound initially.
+
+Using Git this way is not as scary or weird as the first impression may
+suggest.  It is actually how Git was designed to be used.
+
+Check `git help request-pull` for more info.
diff --git a/src/content/en/tils/2020/10/11/search-git-history.adoc b/src/content/en/tils/2020/10/11/search-git-history.adoc
new file mode 100644
index 0000000..696368c
--- /dev/null
+++ b/src/content/en/tils/2020/10/11/search-git-history.adoc
@@ -0,0 +1,29 @@
+= Search changes to a filename pattern in Git history
+:categories: git
+
+:yet: link:../../08/14/browse-git.html
+:another: link:../../08/16/git-search.html
+:entry: link:../../08/28/grep-online.html
+
+This is {yet}[yet] {another}[another] {entry}["search in Git"] TIL entry.  You
+could say that Git has an unintuitive CLI, or that it is very powerful.
+
+I wanted to search for an old file that I knew was in the history of the
+repository, but was deleted some time ago.  So I didn't really remember the
+name, only bits of it.
+
+I immediately went to the list of TILs I had written on searching in Git, but it
+wasn't readily obvious how to do it, so here it goes:
+
+[source,sh]
+----
+git log -- '*pattern*'
+----
+
+You could add globs before the pattern to match things on any directory, and add
+our `-p` friend to promptly see the diffs:
+
+[source,sh]
+----
+git log -p -- '**/*pattern*'
+----
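A self-contained demonstration in a throwaway repository (all names made up): the file shows up in the log even after being deleted.

```shell
cd "$(mktemp -d)" && git init -q .
git config user.email you@example.com && git config user.name you

touch some-old-report.txt
git add . && git commit -qm 'add report'
git rm -q some-old-report.txt && git commit -qm 'remove report'

# both the commit that added and the one that removed the file match
git log --oneline -- '*report*'
```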
diff --git a/src/content/en/tils/2020/11/08/find-broken-symlink.adoc b/src/content/en/tils/2020/11/08/find-broken-symlink.adoc
new file mode 100644
index 0000000..624d24a
--- /dev/null
+++ b/src/content/en/tils/2020/11/08/find-broken-symlink.adoc
@@ -0,0 +1,25 @@
+= Find broken symlinks with "find"
+:categories: shell
+
+:annex: https://git-annex.branchable.com/
+:annex-wanted: https://git-annex.branchable.com/git-annex-wanted/
+
+The `find` command knows how to show broken symlinks:
+
+[source,sh]
+----
+find . -xtype l
+----
+
+This was useful to me when combined with {annex}[Git Annex]. Its
+{annex-wanted}[`wanted`] option allows you to have a "sparse" checkout of the
+content, and save space by not having to copy every annexed file locally:
+
+[source,sh]
+----
+git annex wanted . 'exclude=Music/* and exclude=Videos/*'
+----
+
+You can `find` any broken symlinks outside those directories by querying with
+Git Annex itself, but `find . -xtype l` works on other places too, where broken
+symlinks might be a problem.
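A quick way to see it in action, using a throwaway directory:

```shell
cd "$(mktemp -d)"
touch real-file
ln -s real-file good-link       # resolves fine
ln -s no-such-file broken-link  # target doesn't exist
find . -xtype l
# ./broken-link
```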
diff --git a/src/content/en/tils/2020/11/12/diy-nix-bash-ci.adoc b/src/content/en/tils/2020/11/12/diy-nix-bash-ci.adoc
new file mode 100644
index 0000000..97ace30
--- /dev/null
+++ b/src/content/en/tils/2020/11/12/diy-nix-bash-ci.adoc
@@ -0,0 +1,63 @@
+= DIY bare bones CI server with Bash and Nix
+:categories: ci
+:sort: 2
+
+:post-receive: https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
+:example-project: https://euandreh.xyz/remembering/ci.html
+
+With a server with Nix installed (no need for NixOS), you can leverage its build
+isolation for running CI jobs by adding a {post-receive}[post-receive] Git hook
+to the server.
+
+In most of my projects I like to keep a `test` attribute which runs the tests
+with `nix-build -A test`.  This way, a post-receive hook could look like:
+
+[source,sh]
+----
+#!/usr/bin/env bash
+set -Eeuo pipefail
+set -x
+
+LOGS_DIR="/data/static/ci-logs/libedn"
+mkdir -p "$LOGS_DIR"
+LOGFILE="${LOGS_DIR}/$(date -Is)-$(git rev-parse master).log"
+exec &> >(tee -a "${LOGFILE}")
+
+unset GIT_DIR
+CLONE="$(mktemp -d)"
+git clone . "$CLONE"
+pushd "$CLONE"
+
+finish() {
+ printf "\n\n>>> exit status was %s\n" "$?"
+}
+trap finish EXIT
+
+nix-build -A test
+----
+
+We initially (lines #5 to #8) create a log file, named after _when_ the run
+happens and _which_ commit it is running for.  The `exec` and `tee` combo
+allows the output of the script to go both to `stdout` _and_ the log file.
+This makes the log output show up when you do a `git push`.
+
+Lines #10 to #13 create a fresh clone of the repository and line #20 runs the
+test command.
+
+After using a similar post-receive hook for a while, I now even generate a
+simple HTML file to make the logs available ({example-project}[example project])
+through the browser.
+
+== Upsides
+
+No vendor lock-in, as all you need is a server with Nix installed.
+
+And if you pin the Nixpkgs version you're using, this very simple setup
+yields extremely sandboxed runs in a very hermetic environment.
+
+== Downsides
+
+Besides the many missing shiny features of this very simplistic CI, `nix-build`
+can be very resource intensive. Specifically, it consumes too much memory. So
+if it has to download too many things, or the build closure gets too big, the
+server might very well run out of memory.
diff --git a/src/content/en/tils/2020/11/12/git-bisect-automation.adoc b/src/content/en/tils/2020/11/12/git-bisect-automation.adoc
new file mode 100644
index 0000000..dff8737
--- /dev/null
+++ b/src/content/en/tils/2020/11/12/git-bisect-automation.adoc
@@ -0,0 +1,25 @@
+= Git bisect automation
+:categories: git
+:sort: 1
+
+It is good to have a standardized way to run builds and tests on the
+repository of a project, so that you can find when a bug was introduced by
+using `git bisect run`.
+
+I've already been in the situation where a bug was introduced and I didn't
+know how it was even occurring, and running Git bisect over hundreds of
+commits to pinpoint the failing commit was very empowering:
+
+[source,sh]
+----
+$ GOOD_COMMIT_SHA=e1fd0a817d192c5a5df72dd7422e36558fa78e46
+$ git bisect start HEAD $GOOD_COMMIT_SHA
+$ git bisect run sh -c './build.sh && ./run-failing-case.sh'
+----
+
+Git will then do a binary search between the commits, and run the commands
+you provide it with to find the failing commit.
+
+Instead of being afraid of doing a bisect, you should instead leverage it, and
+make Git help you dig through the history of the repository to find the bad
+code.
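`git bisect run` drives the search through the exit status of the command you give it: 0 marks the commit good, any status from 1 to 127 (except 125) marks it bad, and 125 tells Git to skip a commit that can't be tested. A sketch of such a test script, with stand-in functions so it is self-contained (`build` and `run_failing_case` are hypothetical):

```shell
# stand-ins for a real build and a real failing test case
build()            { true; }                 # pretend the build succeeds
run_failing_case() { test -f known-good; }   # fails when the marker is gone

# the command "git bisect run" would invoke at each step
bisect_step() {
    build || return 125    # can't build this commit: tell bisect to skip it
    run_failing_case       # exit status marks the commit good (0) or bad (1)
}
```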
diff --git a/src/content/en/tils/2020/11/12/useful-bashvars.adoc b/src/content/en/tils/2020/11/12/useful-bashvars.adoc
new file mode 100644
index 0000000..fb148fb
--- /dev/null
+++ b/src/content/en/tils/2020/11/12/useful-bashvars.adoc
@@ -0,0 +1,61 @@
+= Useful Bash variables
+:categories: shell
+
+:bash: https://www.gnu.org/software/bash/
+:bash-bang-bang: https://www.gnu.org/software/bash/manual/bash.html#Event-Designators
+:bash-dollar-underscore: https://www.gnu.org/software/bash/manual/bash.html#Special-Parameters
+
+{bash}[GNU Bash] has a few two letter variables that may be useful when typing
+on the terminal.
+
+== `!!`: the text of the last command
+
+The {bash-bang-bang}[`!!` variable] refers to the previous command, and I find
+it useful when following chains of symlinks:
+
+[source,sh]
+----
+$ which git
+/run/current-system/sw/bin/git
+$ readlink $(!!)
+readlink $(which git)
+/nix/store/5bgr1xpm4m0r72h9049jbbhagxdyrnyb-git-2.28.0/bin/git
+----
+
+It is also useful when you forget to prefix `sudo` to a command that requires
+it:
+
+[source,sh]
+----
+$ ./requires-sudo.sh
+./requires-sudo.sh: Permission denied
+$ sudo !!
+sudo ./requires-sudo.sh
+# all good
+----
+
+Bash prints the command expansion before executing it, so you can follow
+along with what it is doing.
+
+== `$_`: most recent parameter
+
+The {bash-dollar-underscore}[`$_` variable] will give you the most recent
+parameter you provided to the previous command, which can save you some
+typing:
+
+[source,sh]
+----
+# instead of...
+$ mkdir -p a/b/c/d/
+$ cd a/b/c/d/
+
+# ...you can:
+$ mkdir -p a/b/c/d/
+$ cd $_
+----
+
+== Conclusion
+
+I wouldn't use those in a script, as it would make it harder to read; but
+they are useful shortcuts that come in handy when typing at the interactive
+terminal.
diff --git a/src/content/en/tils/2020/11/14/gpodder-media.adoc b/src/content/en/tils/2020/11/14/gpodder-media.adoc
new file mode 100644
index 0000000..f722f35
--- /dev/null
+++ b/src/content/en/tils/2020/11/14/gpodder-media.adoc
@@ -0,0 +1,21 @@
+= gPodder as a media subscription manager
+
+:re-discover: https://www.charlieharrington.com/unexpected-useless-and-urgent
+:gpodder: https://gpodder.github.io/
+
+As we {re-discover}[re-discover] the value of Atom/RSS feeds, most useful feed
+clients I know of don't support media, specifically audio and video.
+
+{gpodder}[gPodder] does.
+
+It is mostly known as a desktop podcatcher.  But the thing about podcasts is
+that the feed is provided through an RSS/Atom feed.  So you can just use
+gPodder as your media feed client, where you have control of what you look at.
+
+The audio and video providers I know of offer an RSS/Atom view of their
+content, so you can, say, treat any YouTube channel like a feed on its own.
+
+gPodder will then manage your feeds, watched/unwatched, queue downloads, etc.
+
+It seems obvious now, but it was a big finding for me.  If it got you
+interested, I recommend giving gPodder a try.
diff --git a/src/content/en/tils/2020/11/30/git-notes-ci.adoc b/src/content/en/tils/2020/11/30/git-notes-ci.adoc
new file mode 100644
index 0000000..48a996b
--- /dev/null
+++ b/src/content/en/tils/2020/11/30/git-notes-ci.adoc
@@ -0,0 +1,112 @@
+= Storing CI data on Git notes
+:categories: git ci
+
+:git-notes: https://git-scm.com/docs/git-notes
+:nix-bash-ci: link:../12/diy-nix-bash-ci.html
+:cgit: https://git.zx2c4.com/cgit/
+
+Extending the bare bones CI server I've {nix-bash-ci}[talked about before],
+divoplade on Freenode suggested storing CI artifacts, such as tarballs,
+binaries and logs, on {git-notes}[Git notes].
+
+I've written a small script that will put log files and CI job data on Git
+notes, and make it visible on the porcelain log. It is a simple extension of
+the previous article:
+
+[source,sh]
+----
+#!/usr/bin/env bash
+set -Eeuo pipefail
+set -x
+
+PREFIX='/srv/ci/vps'
+mkdir -p "$PREFIX"
+read -r _ SHA _ # oldrev newrev refname
+FILENAME="$(date -Is)-$SHA.log"
+LOGFILE="$PREFIX/$FILENAME"
+exec &> >(tee -a "$LOGFILE")
+
+echo "Starting CI job at: $(date -Is)"
+
+finish() {
+ STATUS="$?"
+ printf "\n\n>>> exit status was %s\n" "$STATUS"
+ echo "Finishing CI job at: $(date -Is)"
+ popd
+ NOTE=$(cat <<EOF
+See CI logs with:
+ git notes --ref=refs/notes/ci-logs show $SHA
+ git notes --ref=refs/notes/ci-data show $SHA
+EOF
+)
+ git notes --ref=refs/notes/ci-data add -f -m "$STATUS $FILENAME"
+ git notes --ref=refs/notes/ci-logs add -f -F "$LOGFILE"
+ git notes add -f -m "$NOTE"
+ printf "\n\n>>> CI logs added as Git note."
+}
+trap finish EXIT
+
+unset GIT_DIR
+CLONE="$(mktemp -d)"
+git clone . "$CLONE"
+pushd "$CLONE"
+git config --global user.email git@euandre.org
+git config --global user.name 'EuAndreh CI'
+
+./container make check site
+./container make publish
+----
+
+The important part is in the `finish()` function:
+
+. line #25 stores the exit status and the generated filename, separated by
+  spaces;
+. line #26 adds the log file in a note under the `refs/notes/ci-logs` ref;
+. line #27 adds a note to the commit saying how to see the logs.
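+
+The note round-trip itself can be seen in isolation (a sketch in a throwaway
+repository; the status, filename and identity are illustrative):
+
+[source,sh]
+----
+# Attach a "status filename" pair under refs/notes/ci-data, then read it back.
+cd "$(mktemp -d)"
+git init -q .
+git -c user.email=ci@example.org -c user.name=CI \
+    commit -q --allow-empty -m 'initial commit'
+git -c user.email=ci@example.org -c user.name=CI \
+    notes --ref=refs/notes/ci-data add -f -m '0 2020-11-30-log.txt'
+git notes --ref=refs/notes/ci-data show HEAD
+----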
+
+A commit now has an attached note, and shows it whenever you look at it:
+
+[source,diff]
+----
+$ git show 87c57133abd8be5d7cc46afbf107f59b26066575
+commit 87c57133abd8be5d7cc46afbf107f59b26066575
+Author: EuAndreh <eu@euandre.org>
+Date: Wed Feb 24 21:58:28 2021 -0300
+
+ vps/machines.scm: Change path to cronjob files
+
+Notes:
+ See CI logs with:
+ git notes --ref=refs/notes/ci-logs show 87c57133abd8be5d7cc46afbf107f59b26066575
+ git notes --ref=refs/notes/ci-data show 87c57133abd8be5d7cc46afbf107f59b26066575
+
+diff --git a/servers/vps/machines.scm b/servers/vps/machines.scm
+index d1830ca..a4ccde7 100644
+--- a/servers/vps/machines.scm
++++ b/servers/vps/machines.scm
+@@ -262,8 +262,8 @@ pki " mail-domain " key \"" (tls-priv-for mail-domain) "\""))
+ (service mcron-service-type
+ (mcron-configuration
+ (jobs
+- (list #~(job "30 1 * * 1" "guix gc -d")
+- #~(job "30 0 * * *" "/var/lib/euandreh/backup.sh")))))
++ (list #~(job "30 1 * * 1" "/opt/bin/gc.sh")
++ #~(job "30 0 * * *" "/opt/bin/backup.sh")))))
+ (service dhcp-client-service-type)
+ #;
+ (service opensmtpd-service-type
+----
+
+Other tools such as {cgit}[cgit] will also show notes on the web interface:
+https://euandre.org/git/servers/commit?id=87c57133abd8be5d7cc46afbf107f59b26066575.
+
+You can go even further: since cgit can serve raw blobs directly, you can serve
+such artifacts (log files, release artifacts, binaries) from cgit itself:
+
+[source,sh]
+----
+$ SHA="$(git notes --ref=refs/notes/ci-logs list 87c57133abd8be5d7cc46afbf107f59b26066575)"
+$ echo "https://euandre.org/git/servers/blob?id=$SHA"
+https://euandre.org/git/servers/blob?id=1707a97bae24e3864fe7943f8dda6d01c294fb5c
+----
+
+And like that you'll have cgit serving the artifacts for you:
+https://euandre.org/git/servers/blob?id=1707a97bae24e3864fe7943f8dda6d01c294fb5c.
diff --git a/src/content/en/tils/2020/12/15/shellcheck-repo.adoc b/src/content/en/tils/2020/12/15/shellcheck-repo.adoc
new file mode 100644
index 0000000..387e793
--- /dev/null
+++ b/src/content/en/tils/2020/12/15/shellcheck-repo.adoc
@@ -0,0 +1,165 @@
+= Awk snippet: ShellCheck all scripts in a repository
+:categories: shell
+:updatedat: 2020-12-16
+
+:awk-20-min: https://ferd.ca/awk-in-20-minutes.html
+:shellcheck: https://www.shellcheck.net/
+
+Inspired by Fred Herbert's "{awk-20-min}[Awk in 20 Minutes]", here's a problem I
+just solved with a line of Awk: run ShellCheck in all scripts of a repository.
+
+In my repositories I usually have Bash and POSIX scripts, which I want to keep
+tidy with {shellcheck}[ShellCheck]. Here's the first version of
+`assert-shellcheck.sh`:
+
+[source,sh]
+----
+#!/bin/sh -eux
+
+find . -type f -name '*.sh' -print0 | xargs -0 shellcheck
+----
+
+This is the type of script that I copy around to all repositories, and I want it
+to be capable of working on any repository, without requiring a list of files to
+run ShellCheck on.
+
+This first version worked fine, as all my scripts had the `.sh` extension. But
+I recently added some scripts without any extension, so `assert-shellcheck.sh`
+called for a second version. The first attempt was to try grepping the shebang
+line:
+
+[source,sh]
+----
+$ grep '^#!/' assert-shellcheck.sh
+#!/usr/sh
+----
+
+Good, we have a grep pattern on the first try. Let's try to find all the
+matching files:
+
+[source,sh]
+----
+$ find . -type f | xargs grep -l '^#!/'
+./TODOs.org
+./.git/hooks/pre-commit.sample
+./.git/hooks/pre-push.sample
+./.git/hooks/pre-merge-commit.sample
+./.git/hooks/fsmonitor-watchman.sample
+./.git/hooks/pre-applypatch.sample
+./.git/hooks/pre-push
+./.git/hooks/prepare-commit-msg.sample
+./.git/hooks/commit-msg.sample
+./.git/hooks/post-update.sample
+./.git/hooks/pre-receive.sample
+./.git/hooks/applypatch-msg.sample
+./.git/hooks/pre-rebase.sample
+./.git/hooks/update.sample
+./build-aux/with-guile-env.in
+./build-aux/test-driver
+./build-aux/missing
+./build-aux/install-sh
+./build-aux/install-sh~
+./bootstrap
+./scripts/assert-todos.sh
+./scripts/songbooks
+./scripts/compile-readme.sh
+./scripts/ci-build.sh
+./scripts/generate-tasks-and-bugs.sh
+./scripts/songbooks.in
+./scripts/with-container.sh
+./scripts/assert-shellcheck.sh
+----
+
+This approach has a problem, though: it includes files ignored by Git, such as
+`build-aux/install-sh~`, and even goes into the `.git/` directory and finds
+sample hooks in `.git/hooks/*`.
+
+To list the files that Git is tracking we'll try `git ls-files`:
+
+[source,sh]
+----
+$ git ls-files | xargs grep -l '^#!/'
+TODOs.org
+bootstrap
+build-aux/with-guile-env.in
+old/scripts/assert-docs-spelling.sh
+old/scripts/build-site.sh
+old/scripts/builder.bats.sh
+scripts/assert-shellcheck.sh
+scripts/assert-todos.sh
+scripts/ci-build.sh
+scripts/compile-readme.sh
+scripts/generate-tasks-and-bugs.sh
+scripts/songbooks.in
+scripts/with-container.sh
+----
+
+It looks to be almost there, but the `TODOs.org` entry shows a flaw in it: grep
+is looking for a +'^#!/'+ pattern on any part of the file. In my case,
+`TODOs.org` had a snippet in the middle of the file where a line started with
++#!/bin/sh+.
+
+So what we actually want is to match the *first* line against the pattern. We
+could loop through each file, get the first line with `head -n 1` and grep
+against that, but this is starting to look messy. I bet there is another way of
+doing it concisely...
+
+Let's try Awk. I need a way to select the line numbers to replace `head -n 1`,
+and to stop processing the file if the pattern matches. A quick search points
+me to using `FNR` for the former, and `{ nextfile }` for the latter.  Let's try
+it:
+
+[source,sh]
+----
+$ git ls-files | xargs awk 'FNR>1 { nextfile } /^#!\// { print FILENAME; nextfile }'
+bootstrap
+build-aux/with-guile-env.in
+old/scripts/assert-docs-spelling.sh
+old/scripts/build-site.sh
+old/scripts/builder.bats.sh
+scripts/assert-shellcheck.sh
+scripts/assert-todos.sh
+scripts/ci-build.sh
+scripts/compile-readme.sh
+scripts/generate-tasks-and-bugs.sh
+scripts/songbooks.in
+scripts/with-container.sh
+----
+
+Great! Only `TODOs.org` is missing, but the script is much better: instead of
+matching against any part of the file that may have a shebang-like line, we only
+look for the first. Let's put it back into the `assert-shellcheck.sh` file and
+use NUL bytes as separators to accommodate files with spaces in the name:
+
+[source,sh]
+----
+#!/bin/sh -eux
+
+git ls-files -z | \
+ xargs -0 awk 'FNR>1 { nextfile } /^#!\// { print FILENAME; nextfile }' | \
+ xargs shellcheck
+----
+
+This is where I've stopped, but I imagine a likely improvement: match against
+only +#!/bin/sh+ and +#!/usr/bin/env bash+ shebangs (the ones I use most), to
+avoid running ShellCheck on Perl files, or other shebangs.
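+
+A sketch of that stricter match (sample files in a throwaway directory; the
+accepted shebang list is my guess at the useful set):
+
+[source,sh]
+----
+cd "$(mktemp -d)"
+printf '#!/bin/sh\necho hi\n' > posix-script
+printf '#!/usr/bin/env bash\necho hi\n' > bash-script
+printf '#!/usr/bin/env perl\nprint;\n' > perl-script
+# Only report files whose first line is one of the two accepted shebangs.
+awk 'FNR == 1 && (/^#!\/bin\/sh/ || /^#!\/usr\/bin\/env bash/) { print FILENAME }' \
+    posix-script bash-script perl-script
+----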
+
+Also when reviewing the text of this article, I found that `{ nextfile }` is a
+GNU Awk extension. It would be an improvement if `assert-shellcheck.sh` relied
+on the POSIX subset of Awk for working correctly.
+
+== _Update_
+
+After publishing, I could remove `{ nextfile }` and even make the script
+simpler:
+
+[source,sh]
+----
+#!/bin/sh -eux
+
+git ls-files -z | \
+ xargs -0 awk 'FNR==1 && /^#!\// { print FILENAME }' | \
+ xargs shellcheck
+----
+
+Now both the shell and Awk usage are POSIX compatible.
diff --git a/src/content/en/tils/2020/12/29/svg.adoc b/src/content/en/tils/2020/12/29/svg.adoc
new file mode 100644
index 0000000..0e5dec3
--- /dev/null
+++ b/src/content/en/tils/2020/12/29/svg.adoc
@@ -0,0 +1,125 @@
+= SVG favicon
+:updatedat: 2021-01-12
+
+:favicon: link:../../../../img/favicon.svg
+
+I've wanted to change this website's favicon from a plain `.ico` file to a
+proper SVG. The problem I was trying to solve was to reuse the same image on
+other places, such as avatars.
+
+Generating a PNG from the existing 16x16 icon was possible but bad: the final
+image was blurry. Converting the `.ico` to an SVG was possible, but
+sub-optimal: tools try to guess some vector paths, and the final SVG didn't
+match the original.
+
+Instead I used a tool to draw the "vector pixels" as black squares, and after
+getting the final result I manually cleaned-up the generated XML:
+
+[source,xml]
+----
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16">
+ <path d="M 0 8 L 1 8 L 1 9 L 0 9 L 0 8 Z" />
+ <path d="M 0 13 L 1 13 L 1 14 L 0 14 L 0 13 Z" />
+ <path d="M 1 8 L 2 8 L 2 9 L 1 9 L 1 8 Z" />
+ <path d="M 1 13 L 2 13 L 2 14 L 1 14 L 1 13 Z" />
+ <path d="M 2 8 L 3 8 L 3 9 L 2 9 L 2 8 Z" />
+ <path d="M 2 13 L 3 13 L 3 14 L 2 14 L 2 13 Z" />
+ <path d="M 3 8 L 4 8 L 4 9 L 3 9 L 3 8 Z" />
+ <path d="M 3 13 L 4 13 L 4 14 L 3 14 L 3 13 Z" />
+ <path d="M 4 7 L 5 7 L 5 8 L 4 8 L 4 7 Z" />
+ <path d="M 4 8 L 5 8 L 5 9 L 4 9 L 4 8 Z" />
+ <path d="M 4 13 L 5 13 L 5 14 L 4 14 L 4 13 Z" />
+ <path d="M 5 6 L 6 6 L 6 7 L 5 7 L 5 6 Z" />
+ <path d="M 5 7 L 6 7 L 6 8 L 5 8 L 5 7 Z" />
+ <path d="M 5 13 L 6 13 L 6 14 L 5 14 L 5 13 Z" />
+ <path d="M 6 5 L 7 5 L 7 6 L 6 6 L 6 5 Z" />
+ <path d="M 6 6 L 7 6 L 7 7 L 6 7 L 6 6 Z" />
+ <path d="M 6 14 L 7 14 L 7 15 L 6 15 L 6 14 Z" />
+ <path d="M 7 1 L 8 1 L 8 2 L 7 2 L 7 1 Z" />
+ <path d="M 7 14 L 8 14 L 8 15 L 7 15 L 7 14 Z" />
+ <path d="M 7 15 L 8 15 L 8 16 L 7 16 L 7 15 Z" />
+ <path d="M 7 2 L 8 2 L 8 3 L 7 3 L 7 2 Z" />
+ <path d="M 7 3 L 8 3 L 8 4 L 7 4 L 7 3 Z" />
+ <path d="M 7 4 L 8 4 L 8 5 L 7 5 L 7 4 Z" />
+ <path d="M 7 5 L 8 5 L 8 6 L 7 6 L 7 5 Z" />
+ <path d="M 8 1 L 9 1 L 9 2 L 8 2 L 8 1 Z" />
+ <path d="M 8 15 L 9 15 L 9 16 L 8 16 L 8 15 Z" />
+ <path d="M 9 1 L 10 1 L 10 2 L 9 2 L 9 1 Z" />
+ <path d="M 9 2 L 10 2 L 10 3 L 9 3 L 9 2 Z" />
+ <path d="M 9 6 L 10 6 L 10 7 L 9 7 L 9 6 Z" />
+ <path d="M 9 15 L 10 15 L 10 16 L 9 16 L 9 15 Z" />
+ <path d="M 10 2 L 11 2 L 11 3 L 10 3 L 10 2 Z" />
+ <path d="M 10 3 L 11 3 L 11 4 L 10 4 L 10 3 Z" />
+ <path d="M 10 4 L 11 4 L 11 5 L 10 5 L 10 4 Z" />
+ <path d="M 10 5 L 11 5 L 11 6 L 10 6 L 10 5 Z" />
+ <path d="M 10 6 L 11 6 L 11 7 L 10 7 L 10 6 Z" />
+ <path d="M 11 6 L 12 6 L 12 7 L 11 7 L 11 6 Z" />
+ <path d="M 11 8 L 12 8 L 12 9 L 11 9 L 11 8 Z" />
+ <path d="M 10 15 L 11 15 L 11 16 L 10 16 L 10 15 Z" />
+ <path d="M 11 10 L 12 10 L 12 11 L 11 11 L 11 10 Z" />
+ <path d="M 11 12 L 12 12 L 12 13 L 11 13 L 11 12 Z" />
+ <path d="M 11 14 L 12 14 L 12 15 L 11 15 L 11 14 Z" />
+ <path d="M 11 15 L 12 15 L 12 16 L 11 16 L 11 15 Z" />
+ <path d="M 12 6 L 13 6 L 13 7 L 12 7 L 12 6 Z" />
+ <path d="M 12 8 L 13 8 L 13 9 L 12 9 L 12 8 Z" />
+ <path d="M 12 10 L 13 10 L 13 11 L 12 11 L 12 10 Z" />
+ <path d="M 12 12 L 13 12 L 13 13 L 12 13 L 12 12 Z" />
+ <path d="M 12 14 L 13 14 L 13 15 L 12 15 L 12 14 Z" />
+ <path d="M 13 6 L 14 6 L 14 7 L 13 7 L 13 6 Z" />
+ <path d="M 13 8 L 14 8 L 14 9 L 13 9 L 13 8 Z" />
+ <path d="M 13 10 L 14 10 L 14 11 L 13 11 L 13 10 Z" />
+ <path d="M 13 12 L 14 12 L 14 13 L 13 13 L 13 12 Z" />
+ <path d="M 13 13 L 14 13 L 14 14 L 13 14 L 13 13 Z" />
+ <path d="M 13 14 L 14 14 L 14 15 L 13 15 L 13 14 Z" />
+ <path d="M 14 7 L 15 7 L 15 8 L 14 8 L 14 7 Z" />
+ <path d="M 14 8 L 15 8 L 15 9 L 14 9 L 14 8 Z" />
+ <path d="M 14 9 L 15 9 L 15 10 L 14 10 L 14 9 Z" />
+ <path d="M 14 10 L 15 10 L 15 11 L 14 11 L 14 10 Z" />
+ <path d="M 14 11 L 15 11 L 15 12 L 14 12 L 14 11 Z" />
+ <path d="M 14 12 L 15 12 L 15 13 L 14 13 L 14 12 Z" />
+</svg>
+----
+
+The good thing about this new favicon (at {favicon}[`/static/lord-favicon.svg`])
+is that a) it is simple enough that I feel comfortable editing it manually and
+b) it is an SVG, which means I can generate any desired size.
+
+With the new favicon file, I now had to add to the templates' `<head>` a
+`<link>` to this icon:
+
+[source,html]
+----
+<head>
+ <meta charset="UTF-8" />
+ <link rel="icon" type="image/svg+xml" href="/static/favicon.svg">
+ ...
+----
+
+Still missing is a bitmap image for places that can't handle vector images. I
+used a Jekyll generator to create a PNG from the existing SVG:
+
+[source,ruby]
+----
+module Jekyll
+ class FaviconGenerator < Generator
+ safe true
+ priority :high
+
+ SIZE = 420
+
+ def generate(site)
+ svg = 'static/favicon.svg'
+ png = 'static/favicon.png'
+ unless File.exist? png then
+ puts "Missing '#{png}', generating..."
+ puts `inkscape -o #{png} -w #{SIZE} -h #{SIZE} #{svg}`
+ end
+ end
+ end
+end
+----
+
+I had to increase the priority of the generator so that it would run before
+other places that would use a `{% link /static/lord-favicon.png %}`, otherwise
+the file would be considered missing.
diff --git a/src/content/en/tils/2021/01/12/curl-awk-emails.adoc b/src/content/en/tils/2021/01/12/curl-awk-emails.adoc
new file mode 100644
index 0000000..d432da2
--- /dev/null
+++ b/src/content/en/tils/2021/01/12/curl-awk-emails.adoc
@@ -0,0 +1,148 @@
+= Awk snippet: send email to multiple recipients with cURL
+
+:neomutt: https://neomutt.org/
+:found-out-article: https://blog.edmdesigner.com/send-email-from-linux-command-line/
+:curl: https://curl.se/
+
+As I experiment with {neomutt}[Neomutt], I wanted to keep being able to enqueue
+emails for sending later like my previous setup, so that I didn't rely on having
+an internet connection.
+
+My requirements for the `sendmail` command were:
+
+. store the email in a file, and send it later;
+. send from different addresses, using different SMTP servers.
+
+I couldn't find an MTA that could accomplish that, but I was able to quickly
+write a solution.
+
+The first part was the easiest: store the email in a file:
+
+[source,sh]
+----
+# ~/.config/mutt/muttrc:
+set sendmail=~/bin/enqueue-email.sh
+
+# ~/bin/enqueue-email.sh:
+#!/bin/sh -eu
+
+cat - > "$HOME/mbsync/my-queued-emails/$(date -Is)"
+----
+
+Now that I had the email file stored locally, I needed a program to send the
+email from the file, so that I could create a cronjob like:
+
+[source,sh]
+----
+for f in ~/mbsync/my-queued-emails/*; do
+ ~/bin/dispatch-email.sh "$f" && rm "$f"
+done
+----
+
+The `dispatch-email.sh` would have to look at the `From:` header and decide
+which SMTP server to use. As I {found-out-article}[found out] that {curl}[curl]
+supports SMTP and is able to send emails, this is what I ended up with:
+
+[source,sh]
+----
+#!/bin/sh -eu
+
+F="$1"
+
+rcpt="$(awk '
+ match($0, /^(To|Cc|Bcc): (.*)$/, m) {
+ split(m[2], tos, ",")
+ for (i in tos) {
+ print "--mail-rcpt " tos[i]
+ }
+ }
+' "$F")"
+
+if grep -qE '^From: .*<addr@server1\.org>$' "$F"; then
+ curl \
+ -s \
+ --url smtp://smtp.server1.org:587 \
+ --ssl-reqd \
+ --mail-from addr@server1.org \
+ $rcpt \
+ --user 'addr@server1.org:my-long-and-secure-passphrase' \
+ --upload-file "$F"
+elif grep -qE '^From: .*<addr@server2\.org>$' "$F"; then
+ curl \
+ -s \
+ --url smtp://smtp.server2.org:587 \
+ --ssl-reqd \
+ --mail-from addr@server2.org \
+ $rcpt \
+ --user 'addr@server2.org:my-long-and-secure-passphrase' \
+ --upload-file "$F"
+else
+ echo 'Bad "From: " address'
+ exit 1
+fi
+----
+
+Most of the curl flags used are self-explanatory, except for `$rcpt`.
+
+curl connects to the SMTP server, but doesn't set the recipient address by
+looking at the message. My solution was to generate the curl flags, store them
+in `$rcpt` and use it unquoted to leverage shell word splitting.
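+
+The word splitting can be seen in isolation (dummy addresses):
+
+[source,sh]
+----
+# Each "--mail-rcpt ADDR" line in $rcpt becomes two separate words for curl.
+rcpt='--mail-rcpt to@example.com
+--mail-rcpt cc@example.com'
+set -- $rcpt  # intentionally unquoted: let the shell split it
+echo "$# words: $*"
+----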
+
+To me, the most interesting part was building the `$rcpt` flags. My first
+instinct was to try grep, but it couldn't print only a capture group of the
+regex.  As I
+started to turn towards sed, I envisioned needing something else to loop over
+the sed output, and I then moved to Awk.
+
+In the short Awk snippet, 3 things were new to me: the `match(...)`,
+`split(...)` and `for () {}`. The only other function I have ever used was
+`gsub(...)`, but these new ones felt similar enough that I could almost guess
+their behaviour and arguments. `match(...)` stores the matches of a regex on
+the given array positionally, and `split(...)` stores the chunks in the given
+array.
+
+I even did it incrementally:
+
+[source,sh]
+----
+$ H='To: to@example.com, to2@example.com\nCc: cc@example.com, cc2@example.com\nBcc: bcc@example.com,bcc2@example.com\n'
+$ printf "$H" | awk '/^To: .*$/ { print $0 }'
+To: to@example.com, to2@example.com
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { print m }'
+awk: cmd. line:1: (FILENAME=- FNR=1) fatal: attempt to use array `m' in a scalar context
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { print m[0] }'
+To: to@example.com, to2@example.com
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { print m[1] }'
+to@example.com, to2@example.com
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos }'
+awk: cmd. line:1: (FILENAME=- FNR=1) fatal: attempt to use array `tos' in a scalar context
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos[0] }'
+
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos[1] }'
+to@example.com,
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos[2] }'
+to2@example.com
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos[3] }'
+----
+
+(This isn't the verbatim interactive session, but a cleaned version to make it
+more readable.)
+
+At this point, I realized I needed a for loop over the `tos` array, and I moved
+the Awk snippet into the `~/bin/dispatch-email.sh`. I liked the final thing:
+
+[source,awk]
+----
+match($0, /^(To|Cc|Bcc): (.*)$/, m) {
+ split(m[2], tos, ",")
+ for (i in tos) {
+ print "--mail-rcpt " tos[i]
+ }
+}
+----
+
+As I learn more about Awk, I feel that it is too undervalued, as many people
+turn to Perl or other programming languages when Awk suffices. The advantage is
+pretty clear: writing programs that run on any POSIX system, without extra
+dependencies required.
+
+Coding to the standards is underrated.
diff --git a/src/content/en/tils/2021/01/17/posix-shebang.adoc b/src/content/en/tils/2021/01/17/posix-shebang.adoc
new file mode 100644
index 0000000..5cf0695
--- /dev/null
+++ b/src/content/en/tils/2021/01/17/posix-shebang.adoc
@@ -0,0 +1,58 @@
+= POSIX sh and shebangs
+
+:awk-1: link:../../../2020/12/15/shellcheck-repo.html
+:awk-2: link:../12/curl-awk-emails.html
+
+As I {awk-1}[keep moving] {awk-2}[towards POSIX], I'm on the process of
+migrating all my Bash scripts to POSIX sh.
+
+As I dropped `[[`, arrays and other Bashisms, I was left staring at the first
+line of every script, wondering what to do: what is the POSIX sh equivalent of
+`#!/usr/bin/env bash`? I already knew that POSIX says nothing about shebangs,
+and that the portable way to call a POSIX sh script is `sh script.sh`, but
+I didn't know what to do with that first line.
+
+What I had previously was:
+
+[source,sh]
+----
+#!/usr/bin/env bash
+set -Eeuo pipefail
+cd "$(dirname "${BASH_SOURCE[0]}")"
+----
+
+Obviously, the `$BASH_SOURCE` would be gone, and I would have to adapt some of
+my scripts to not rely on the script location. The `-E` and `-o pipefail`
+options were also gone, and would be replaced by nothing.
+
+I converted all of them to:
+
+[source,sh]
+----
+#!/bin/sh -eu
+----
+
+I moved the `-eu` options to the shebang line itself, striving for conciseness.
+But as I changed callers from `./script.sh` to `sh script.sh`, things started to
+fail. Some tests that should fail reported errors, but didn't return 1.
+
+My first reaction was to revert back to `./script.sh`, but the POSIX bug I
+caught is a strong strain, and when I went back to it, I figured that the
+callers were missing some flags. Specifically, `sh -eu script.sh`.
+
+Then it clicked: when running with `sh script.sh`, the shebang line with the sh
+options is ignored, as it is a comment!
+
+Which means that the shebang most friendly with POSIX is:
+
+[source,sh]
+----
+#!/bin/sh
+set -eu
+----
+
+. when running via `./script.sh`, if the system has an executable at `/bin/sh`,
+ it will be used to run the script;
+. when running via `sh script.sh`, the sh options aren't ignored as previously.
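+
+The difference can be demonstrated with a small script (a sketch; the variable
+name is illustrative):
+
+[source,sh]
+----
+cd "$(mktemp -d)"
+cat >script.sh <<'EOF'
+#!/bin/sh -u
+echo "$UNDEFINED_VAR"
+echo reached
+EOF
+chmod +x script.sh
+# Direct execution: the kernel passes -u to sh, so the unset variable aborts.
+./script.sh || echo 'direct run aborted'
+# Via sh: the shebang line is just a comment, so -u is NOT in effect.
+sh script.sh
+----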
+
+TIL.
diff --git a/src/content/en/tils/2021/04/24/cl-generic-precedence.adoc b/src/content/en/tils/2021/04/24/cl-generic-precedence.adoc
new file mode 100644
index 0000000..541afb0
--- /dev/null
+++ b/src/content/en/tils/2021/04/24/cl-generic-precedence.adoc
@@ -0,0 +1,149 @@
+= Common Lisp argument precedence order parameterization of a generic function
+
+When CLOS dispatches a method, it picks the most specific method definition to
+the argument list:
+
+[source,lisp]
+----
+
+* (defgeneric a-fn (x))
+#<STANDARD-GENERIC-FUNCTION A-FN (0) {5815ACB9}>
+
+* (defmethod a-fn (x) :default-method)
+#<STANDARD-METHOD A-FN (T) {581DB535}>
+
+* (defmethod a-fn ((x number)) :a-number)
+#<STANDARD-METHOD A-FN (NUMBER) {58241645}>
+
+* (defmethod a-fn ((x (eql 1))) :number-1)
+#<STANDARD-METHOD A-FN ((EQL 1)) {582A7D75}>
+
+* (a-fn nil)
+:DEFAULT-METHOD
+
+* (a-fn "1")
+:DEFAULT-METHOD
+
+* (a-fn 0)
+:A-NUMBER
+
+* (a-fn 1)
+:NUMBER-1
+----
+
+CLOS uses a similar logic when choosing the method from parent classes, when
+multiple ones are available:
+
+[source,lisp]
+----
+* (defclass class-a () ())
+
+#<STANDARD-CLASS CLASS-A {583E0B25}>
+* (defclass class-b () ())
+
+#<STANDARD-CLASS CLASS-B {583E7F6D}>
+* (defgeneric another-fn (obj))
+
+#<STANDARD-GENERIC-FUNCTION ANOTHER-FN (0) {583DA749}>
+* (defmethod another-fn ((obj class-a)) :class-a)
+; Compiling LAMBDA (.PV-CELL. .NEXT-METHOD-CALL. OBJ):
+; Compiling Top-Level Form:
+
+#<STANDARD-METHOD ANOTHER-FN (CLASS-A) {584523C5}>
+* (defmethod another-fn ((obj class-b)) :class-b)
+; Compiling LAMBDA (.PV-CELL. .NEXT-METHOD-CALL. OBJ):
+; Compiling Top-Level Form:
+
+#<STANDARD-METHOD ANOTHER-FN (CLASS-B) {584B8895}>
+----
+
+Given the above definitions, when inheriting from `class-a` and `class-b`, the
+order of inheritance matters:
+
+[source,lisp]
+----
+* (defclass class-a-coming-first (class-a class-b) ())
+#<STANDARD-CLASS CLASS-A-COMING-FIRST {584BE6AD}>
+
+* (defclass class-b-coming-first (class-b class-a) ())
+#<STANDARD-CLASS CLASS-B-COMING-FIRST {584C744D}>
+
+* (another-fn (make-instance 'class-a-coming-first))
+:CLASS-A
+
+* (another-fn (make-instance 'class-b-coming-first))
+:CLASS-B
+----
+
+Combining the order of inheritance with generic functions with multiple
+arguments, CLOS has to make a choice of how to pick a method given two competing
+definitions, and its default strategy is prioritizing from left to right:
+
+[source,lisp]
+----
+* (defgeneric yet-another-fn (obj1 obj2))
+#<STANDARD-GENERIC-FUNCTION YET-ANOTHER-FN (0) {584D9EC9}>
+
+* (defmethod yet-another-fn ((obj1 class-a) obj2) :first-arg-specialized)
+#<STANDARD-METHOD YET-ANOTHER-FN (CLASS-A T) {5854269D}>
+
+* (defmethod yet-another-fn (obj1 (obj2 class-b)) :second-arg-specialized)
+#<STANDARD-METHOD YET-ANOTHER-FN (T CLASS-B) {585AAAAD}>
+
+* (yet-another-fn (make-instance 'class-a) (make-instance 'class-b))
+:FIRST-ARG-SPECIALIZED
+----
+
+CLOS has to make a choice between the first and the second definition of
+`yet-another-fn`, but its choice is just a heuristic. What if we want the
+choice to be based on the second argument, instead of the first?
+
+For that, we use the `:argument-precedence-order` option when declaring a
+generic function:
+
+[source,lisp]
+----
+* (defgeneric yet-another-fn (obj1 obj2) (:argument-precedence-order obj2 obj1))
+#<STANDARD-GENERIC-FUNCTION YET-ANOTHER-FN (2) {584D9EC9}>
+
+* (yet-another-fn (make-instance 'class-a) (make-instance 'class-b))
+:SECOND-ARG-SPECIALIZED
+----
+
+I liked that the `:argument-precedence-order` option exists. We shouldn't have
+to change the arguments from `(obj1 obj2)` to `(obj2 obj1)` just to make CLOS
+pick the method that we want. We can configure its default behaviour if
+desired, and keep the order of arguments however it best fits the generic
+function.
+
+== Comparison with Clojure
+
+Clojure has an equivalent, when using `defmulti`.
+
+Since when declaring a multi-method with `defmulti` we must define the dispatch
+function, Clojure uses it to pick the method definition. Since the dispatch
+function is required, there is no need for a default behaviour, such as
+left-to-right.
+
+== Conclusion
+
+Making the argument precedence order configurable for generic functions but not
+for class definitions makes a lot of sense.
+
+When declaring a class, we can choose the precedence order, and that is about
+it. But when defining a generic function, the order of arguments is more
+important to the function semantics, and the argument precedence being
+left-to-right is just the default behaviour.
+
+One shouldn't change the order of arguments of a generic function for the sake
+of tailoring it to the CLOS priority ranking algorithm, but doing it for a class
+definition is just fine.
+
+TIL.
+
+== References
+
+:clos-wiki: https://en.wikipedia.org/wiki/Object-Oriented_Programming_in_Common_Lisp
+
+. {clos-wiki}[Object-Oriented Programming in Common Lisp: A Programmer's Guide
+ to CLOS], by Sonja E. Keene
diff --git a/src/content/en/tils/2021/04/24/clojure-autocurry.adoc b/src/content/en/tils/2021/04/24/clojure-autocurry.adoc
new file mode 100644
index 0000000..a2c2835
--- /dev/null
+++ b/src/content/en/tils/2021/04/24/clojure-autocurry.adoc
@@ -0,0 +1,135 @@
+= Clojure auto curry
+:sort: 1
+:updatedat: 2021-04-27
+
+:defcurry-orig: https://lorettahe.github.io/clojure/2016/09/22/clojure-auto-curry
+
+Here's a simple macro defined by {defcurry-orig}[Loretta He] to create Clojure
+functions that are curried on all arguments, relying on Clojure's multi-arity
+support:
+
+[source,clojure]
+----
+(defmacro defcurry
+ [name args & body]
+ (let [partials (map (fn [n]
+ `(~(subvec args 0 n) (partial ~name ~@(take n args))))
+ (range 1 (count args)))]
+ `(defn ~name
+ (~args ~@body)
+ ~@partials)))
+----
+
+A naive `add` definition, alongside its usage and macroexpansion:
+
+[source,clojure]
+----
+user=> (defcurry add
+ [a b c d e]
+ (+ 1 2 3 4 5))
+#'user/add
+
+user=> (add 1)
+#object[clojure.core$partial$fn__5857 0x2c708440 "clojure.core$partial$fn__5857@2c708440"]
+
+user=> (add 1 2 3 4)
+#object[clojure.core$partial$fn__5863 0xf4c0e4e "clojure.core$partial$fn__5863@f4c0e4e"]
+
+user=> ((add 1) 2 3 4 5)
+15
+
+user=> (((add 1) 2 3) 4 5)
+15
+
+user=> (use 'clojure.pprint)
+nil
+
+user=> (pprint
+ (macroexpand
+ '(defcurry add
+ [a b c d e]
+ (+ 1 2 3 4 5))))
+(def
+ add
+ (clojure.core/fn
+ ([a b c d e] (+ 1 2 3 4 5))
+ ([a] (clojure.core/partial add a))
+ ([a b] (clojure.core/partial add a b))
+ ([a b c] (clojure.core/partial add a b c))
+ ([a b c d] (clojure.core/partial add a b c d))))
+nil
+----
+
+This simplistic `defcurry` definition doesn't support optional parameters,
+multi-arity, `&` rest arguments, docstrings, etc., but it could certainly evolve
+to do so.
+
+I like how `defcurry` is so short, and delegates the responsibility for the
+multi-arity logic to Clojure's built-in multi-arity support. Simple and
+elegant.
+
+Same Clojure as before, now with auto-currying via macros.
+
+== Comparison with Common Lisp
+
+My attempt at writing an equivalent for Common Lisp gives me:
+
+[source,lisp]
+----
+(defun partial (fn &rest args)
+ (lambda (&rest args2)
+ (apply fn (append args args2))))
+
+(defun curry-n (n func)
+ (cond ((< n 0) (error "Too many arguments"))
+ ((zerop n) (funcall func))
+ (t (lambda (&rest rest)
+ (curry-n (- n (length rest))
+ (apply #'partial func rest))))))
+
+(defmacro defcurry (name args &body body)
+ `(defun ,name (&rest rest)
+ (let ((func (lambda ,args ,@body)))
+ (curry-n (- ,(length args) (length rest))
+ (apply #'partial func rest)))))
+----
+
+Without built-in multi-arity support, we have to do more work, like tracking the
+number of arguments consumed so far. We also have to write `#'partial`
+ourselves.  That is, without depending on any library, sticking to ANSI
+Common Lisp.
+
+The usage is pretty similar:
+
+[source,lisp]
+----
+* (defcurry add (a b c d e)
+ (+ a b c d e))
+ADD
+
+* (add 1)
+#<FUNCTION (LAMBDA (&REST REST) :IN CURRY-N) {100216419B}>
+
+* (funcall (add 1) 2 3 4)
+#<FUNCTION (LAMBDA (&REST REST) :IN CURRY-N) {100216537B}>
+
+* (funcall (add 1) 2 3 4 5)
+15
+
+* (funcall (funcall (add 1) 2 3) 4 5)
+15
+
+* (macroexpand-1
+ '(defcurry add (a b c d e)
+ (+ a b c d e)))
+(DEFUN ADD (&REST REST)
+ (LET ((FUNC (LAMBDA (A B C D E) (+ A B C D E))))
+ (CURRY-N (- 5 (LENGTH REST)) (APPLY #'PARTIAL FUNC REST))))
+T
+----
+
+This also requires `funcall`s, since we return a `lambda` that doesn't live in
+the function namespace.
+
+Like the Clojure one, it doesn't support optional parameters, `&rest` rest
+arguments, docstrings, etc., but it also could evolve to do so.
diff --git a/src/content/en/tils/2021/04/24/scm-nif.adoc b/src/content/en/tils/2021/04/24/scm-nif.adoc
new file mode 100644
index 0000000..2ea8a6f
--- /dev/null
+++ b/src/content/en/tils/2021/04/24/scm-nif.adoc
@@ -0,0 +1,61 @@
+= Three-way conditional for number signs on Lisp
+:categories: lisp scheme common-lisp
+:sort: 2
+:updatedat: 2021-08-14
+
+:on-lisp: https://www.paulgraham.com/onlisptext.html
+:sicp: https://mitpress.mit.edu/sites/default/files/sicp/index.html
+
+A useful macro from Paul Graham's {on-lisp}[On Lisp] book:
+
+[source,lisp]
+----
+(defmacro nif (expr pos zero neg)
+ (let ((g (gensym)))
+ `(let ((,g ,expr))
+ (cond ((plusp ,g) ,pos)
+ ((zerop ,g) ,zero)
+ (t ,neg)))))
+----
+
+After I looked at this macro, I started seeing opportunities to use it in many
+places, and yet I didn't see anyone else using it.
+
+The latest example I can think of is section 1.3.3 of {sicp}[Structure and
+Interpretation of Computer Programs], which I was reading recently:
+
+[source,scheme]
+----
+(define (search f neg-point pos-point)
+ (let ((midpoint (average neg-point pos-point)))
+    (if (close-enough? neg-point pos-point)
+ midpoint
+ (let ((test-value (f midpoint)))
+ (cond ((positive? test-value)
+ (search f neg-point midpoint))
+ ((negative? test-value)
+ (search f midpoint pos-point))
+ (else midpoint))))))
+----
+
+Not that the book should introduce such a macro this early, but I couldn't avoid
+feeling bothered by not using the `nif` macro, which could even remove the need
+for the intermediate `test-value` variable:
+
+[source,scheme]
+----
+(define (search f neg-point pos-point)
+ (let ((midpoint (average neg-point pos-point)))
+    (if (close-enough? neg-point pos-point)
+ midpoint
+ (nif (f midpoint)
+ (search f neg-point midpoint)
+             midpoint
+ (search f midpoint pos-point)))))
+----
+
+It also avoids `cond`'s extra clunky parentheses for grouping, which are
+unnecessary but built-in.
+
+As a macro, I personally feel it tilts the balance towards expressiveness,
+despite the extra toll it takes on cognitive load.
diff --git a/src/content/en/tils/2021/07/23/git-tls-gpg.adoc b/src/content/en/tils/2021/07/23/git-tls-gpg.adoc
new file mode 100644
index 0000000..f198c2b
--- /dev/null
+++ b/src/content/en/tils/2021/07/23/git-tls-gpg.adoc
@@ -0,0 +1,45 @@
+= GPG verification of Git repositories without TLS
+
+:empty:
+:git-protocol: https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protocols#_the_git_protocol
+:remembering: https://euandreh.xyz/remembering/
+
+For online Git repositories that use the {git-protocol}[Git Protocol] for
+serving code, you can use GPG to handle authentication, if you have the
+committer's public key.
+
+Here's how I'd verify that I've cloned an authentic version of
+{remembering}[remembering]footnote:not-available[
+ Funnily enough, not available anymore via the Git Protocol, now only with
+ HTTPS.
+]:
+
+[source,sh]
+----
+$ wget -qO- https://euandre.org/public.asc | gpg --import -
+gpg: key 81F90EC3CD356060: "EuAndreh <eu@euandre.org>" not changed
+gpg: Total number processed: 1
+gpg:              unchanged: 1
+$ pushd `mktemp -d`
+$ git clone git://euandreh.xyz/remembering .
+$ git verify-commit HEAD
+gpg: Signature made Sun 27 Jun 2021 16:50:21 -03
+gpg:                using RSA key 5BDAE9B8B2F6C6BCBB0D6CE581F90EC3CD356060
+gpg: Good signature from "EuAndreh <eu@euandre.org>" [ultimate]
+----
+
+On the first line we import the public key (funnily enough, available via
+HTTPS), and after cloning the code via the insecure `git://` protocol, we use
+`git verify-commit` to check the signature.
+
+The verification is successful, and we can see that the public key from the
+signature matches the fingerprint of the imported one. However,
+`git verify-commit` doesn't have an option to specify which public key the
+commit should be verified against, which means that in a MITM attack the
+attacker could easily serve a malicious repository with signed commits, and
+you'd have to verify the public key yourself. That would need to happen for
+subsequent fetches, too.
+
+Even though this is possible, it is not very convenient, and it is certainly
+brittle. The Git Protocol is much faster, but being harder to secure is a big
+downside.
diff --git a/src/content/en/tils/2021/08/11/js-bigint-reviver.adoc b/src/content/en/tils/2021/08/11/js-bigint-reviver.adoc
new file mode 100644
index 0000000..98ee79b
--- /dev/null
+++ b/src/content/en/tils/2021/08/11/js-bigint-reviver.adoc
@@ -0,0 +1,89 @@
+= Encoding and decoding JavaScript BigInt values with reviver
+:updatedat: 2021-08-13
+
+:reviver-fn: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#using_the_reviver_parameter
+:bigint: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt
+:json-rfc: https://datatracker.ietf.org/doc/html/rfc8259
+
+`JSON.parse()` accepts a second parameter: a {reviver-fn}[`reviver()` function].
+It is a function that can be used to transform the `JSON` values as they're
+being parsed.
+
+As it turns out, when combined with JavaScript's {bigint}[`BigInt`] type, you
+can parse and encode JavaScript `BigInt` numbers via JSON:
+
+[source,javascript]
+----
+const bigIntReviver = (_, value) =>
+ typeof value === "string" && value.match(/^-?[0-9]+n$/)
+ ? BigInt(value.slice(0, value.length - 1))
+ : value;
+----
+
+I chose to interpret strings that contain only digits and an ending `n` suffix
+as `BigInt` values, similar to how JavaScript interprets `123` (a number)
+differently from `123n` (a `bigint`).
+
+We do those checks before constructing the `BigInt` to avoid throwing needless
+exceptions and catching them in the parsing function, as this could easily
+become a bottleneck when parsing large JSON values.
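+
+To see the cost being avoided: calling `BigInt()` on a string that isn't a
+valid integer throws, so a `try`/`catch`-based reviver would raise and swallow
+an exception for every ordinary string in the document:
+
+[source,javascript]
+----
+// BigInt() throws a SyntaxError for strings that aren't valid
+// integers, which is why the reviver tests with a regex first
+// instead of try/catch-ing on every string value
+let threw = false;
+try {
+  BigInt("a string");
+} catch (e) {
+  threw = e instanceof SyntaxError;
+}
+console.log(threw); // true
+----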
+
+In order to do the full roundtrip, we now only need the `toJSON()` counterpart:
+
+[source,javascript]
+----
+BigInt.prototype.toJSON = function() {
+ return this.toString() + "n";
+};
+----
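+
+The round trip is lossless even beyond `Number.MAX_SAFE_INTEGER`, which is the
+whole point of reaching for `BigInt`. A quick sketch, restating both
+definitions so the snippet stands alone:
+
+[source,javascript]
+----
+const bigIntReviver = (_, value) =>
+  typeof value === "string" && value.match(/^-?[0-9]+n$/)
+    ? BigInt(value.slice(0, value.length - 1))
+    : value;
+
+BigInt.prototype.toJSON = function() {
+  return this.toString() + "n";
+};
+
+// 2^53 + 1: not representable as a double, fine as a BigInt
+const n = 9007199254740993n;
+const roundtripped = JSON.parse(JSON.stringify(n), bigIntReviver);
+console.log(roundtripped === n); // true
+----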
+
+With both `bigIntReviver` and `toJSON` defined, we can now successfully parse
+and encode JavaScript objects with `BigInt` values transparently:
+
+[source,javascript]
+----
+const s = `[
+ null,
+ true,
+ false,
+ -1,
+ 3.14,
+ "a string",
+ { "a-number": "-123" },
+ { "a-bigint": "-123n" }
+]`;
+
+const parsed = JSON.parse(s, bigIntReviver);
+const s2 = JSON.stringify(parsed);
+
+console.log(parsed);
+console.log(s2);
+
+console.log(typeof parsed[6]["a-number"]);
+console.log(typeof parsed[7]["a-bigint"]);
+----
+
+The output of the above is:
+
+[source,javascript]
+----
+[
+ null,
+ true,
+ false,
+ -1,
+ 3.14,
+ 'a string',
+ { 'a-number': '-123' },
+ { 'a-bigint': -123n }
+]
+[null,true,false,-1,3.14,"a string",{"a-number":"-123"},{"a-bigint":"-123n"}]
+string
+bigint
+----
+
+If you're in a web browser, you can probably try copying and pasting the above
+code into the console right now, as is.
+
+Even though {json-rfc}[`JSON`] doesn't include `BigInt` numbers, encoding and
+decoding them as strings is quite trivial in JavaScript.
diff --git a/src/content/en/tils/categories.adoc b/src/content/en/tils/categories.adoc
new file mode 100644
index 0000000..f29acda
--- /dev/null
+++ b/src/content/en/tils/categories.adoc
@@ -0,0 +1 @@
+= Articles by category
diff --git a/src/content/en/tils/index.adoc b/src/content/en/tils/index.adoc
new file mode 100644
index 0000000..7e85335
--- /dev/null
+++ b/src/content/en/tils/index.adoc
@@ -0,0 +1,7 @@
+= Today I Learned
+
+:anna-e-so: https://til.flourishing.stream/
+
+**T**oday **I** **L**earned: small entries of useful knowledge.
+
+Shameless rip-off of {anna-e-so}[Anna e só].