author    EuAndreh <eu@euandre.org>  2025-03-31 21:51:40 -0300
committer EuAndreh <eu@euandre.org>  2025-03-31 21:51:40 -0300
commit    570ec471d1605318aeefb030cd78682ae442235b (patch)
tree      51e17eabe37c6689f8799b55e6875c3480329a2c /src/content/tils
parent    Makefile, mkdeps.sh: Derive index.html and feed.xml from more static "sortdat... (diff)
src/content/: Update all files left to asciidoc
Diffstat (limited to 'src/content/tils')
-rw-r--r--  src/content/tils/2020/08/12/filename-timestamp.adoc      38
-rw-r--r--  src/content/tils/2020/08/13/code-jekyll.adoc            102
-rw-r--r--  src/content/tils/2020/08/14/browse-git.adoc              67
-rw-r--r--  src/content/tils/2020/08/16/git-search.adoc              51
-rw-r--r--  src/content/tils/2020/08/28/grep-online.adoc             83
-rw-r--r--  src/content/tils/2020/09/04/email-cli-fun-profit.adoc    51
-rw-r--r--  src/content/tils/2020/09/05/oldschool-pr.adoc            77
-rw-r--r--  src/content/tils/2020/10/11/search-git-history.adoc      47
-rw-r--r--  src/content/tils/2020/11/08/find-broken-symlink.adoc     36
-rw-r--r--  src/content/tils/2020/11/12/diy-nix-bash-ci.adoc         45
-rw-r--r--  src/content/tils/2020/11/12/git-bisect-automation.adoc   23
-rw-r--r--  src/content/tils/2020/11/12/useful-bashvars.adoc         56
-rw-r--r--  src/content/tils/2020/11/14/gpodder-media.adoc           30
-rw-r--r--  src/content/tils/2020/11/30/git-notes-ci.adoc            63
-rw-r--r--  src/content/tils/2020/12/15/shellcheck-repo.adoc         90
-rw-r--r--  src/content/tils/2020/12/29/svg.adoc                     50
-rw-r--r--  src/content/tils/2021/01/12/curl-awk-emails.adoc        100
-rw-r--r--  src/content/tils/2021/01/17/posix-shebang.adoc           63
-rw-r--r--  src/content/tils/2021/04/24/cl-generic-precedence.adoc   98
-rw-r--r--  src/content/tils/2021/04/24/clojure-autocurry.adoc       66
-rw-r--r--  src/content/tils/2021/04/24/scm-nif.adoc                 53
-rw-r--r--  src/content/tils/2021/07/23/git-tls-gpg.adoc             43
-rw-r--r--  src/content/tils/2021/08/11/js-bigint-reviver.adoc       81
23 files changed, 623 insertions, 790 deletions
diff --git a/src/content/tils/2020/08/12/filename-timestamp.adoc b/src/content/tils/2020/08/12/filename-timestamp.adoc
index 7495fc9..ed7dee8 100644
--- a/src/content/tils/2020/08/12/filename-timestamp.adoc
+++ b/src/content/tils/2020/08/12/filename-timestamp.adoc
@@ -1,44 +1,30 @@
----
-
-title: Simple filename timestamp
-
-date: 2020-08-12
-
-updated_at:
-
-layout: post
-
-lang: en
-
-ref: simple-filename-timestamp
-
-eu_categories: shell
-
----
+= Simple filename timestamp
When writing Jekyll posts or creating log files with dates on them, I usually
-struggle with finding a direct way of accomplishing that. There's a simple
+struggle with finding a direct way of accomplishing that. There's a simple
solution: `date -I`.
-```shell
+[source,shell]
+----
./my-program.sh > my-program.$(date -I).log
cp post-template.md _posts/$(date -I)-post-slug.md
-```
+----
-Using this built-in GNU/Linux tool allows you to `touch $(date -I).md` to readily
-create a `2020-08-12.md` file.
+Using this built-in GNU/Linux tool allows you to `touch $(date -I).md` to
+readily create a `2020-08-12.md` file.
I always had to read `man date` or search the web over and over, and after doing
this repeatedly it became clear that both `date -I` and `date -Is` (`s` here
stands for seconds) are the thing that I'm looking for 95% of the time:
-```shell
+[source,shell]
+----
# inside my-program.sh
echo "Program started at $(date -Is)"
# output is:
# Program started at 2020-08-12T09:04:58-03:00
-```
+----
Both date formats are hierarchical, having the bigger time intervals to the
-left. This means that you can easily sort them (and even tab-complete them) with
-no extra effort or tool required.
+left. This means that you can easily sort them (and even tab-complete them)
+with no extra effort or tool required.
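The sortability claim in the post above is easy to verify in a throwaway directory; this is a sketch with made-up filenames:

```shell
# Throwaway check: ISO-8601 names in lexicographic order are also in
# chronological order, so plain sort (or shell globbing) is enough.
demo=$(mktemp -d)
cd "$demo"
touch 2019-12-31.md 2020-01-05.md 2020-08-12.md
oldest=$(ls | sort | head -n 1)
echo "$oldest"   # the oldest date sorts first: 2019-12-31.md
```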
diff --git a/src/content/tils/2020/08/13/code-jekyll.adoc b/src/content/tils/2020/08/13/code-jekyll.adoc
index 6566928..6bd90b0 100644
--- a/src/content/tils/2020/08/13/code-jekyll.adoc
+++ b/src/content/tils/2020/08/13/code-jekyll.adoc
@@ -1,33 +1,35 @@
----
-title: Anchor headers and code lines in Jekyll
-date: 2020-08-13
-layout: post
-lang: en
-ref: anchor-headers-and-code-lines-in-jekyll
----
-The default Jekyll toolbox ([Jekyll][0], [kramdown][1] and [rouge][2]) doesn't
-provide with a configuration option to add anchors to headers and code blocks.
-
-[0]: https://jekyllrb.com/
-[1]: https://kramdown.gettalong.org/
-[2]: http://rouge.jneen.net/
+= Anchor headers and code lines in Jekyll
-The best way I found of doing this is by creating a simple Jekyll plugin, more
-specifically, a [Jekyll hook][3]. These allow you to jump in to the Jekyll build
-and add a processing stage before of after Jekyll performs something.
+:empty:
+:jekyll: https://jekyllrb.com/
+:kramdown: https://kramdown.gettalong.org/
+:rouge: https://rouge.jneen.net/
+:jekyll-hook: https://jekyllrb.com/docs/plugins/hooks/
+
+The default Jekyll toolbox ({jekyll}[Jekyll], {kramdown}[kramdown] and
+{rouge}[rouge]) doesn't provide a configuration option to add anchors to
+headers and code blocks.
-[3]: https://jekyllrb.com/docs/plugins/hooks/
+The best way I found of doing this is by creating a simple Jekyll plugin, more
+specifically, a {jekyll-hook}[Jekyll hook]. These allow you to jump into the
+Jekyll build and add a processing stage before or after Jekyll performs
+something.
All you have to do is add the code to `_plugins/my-jekyll-plugin-code.rb`, and
Jekyll knows to pick it up and call your code on the appropriate time.
-## Anchor on headers
+== Anchor on headers
+
+:jemoji: https://github.com/jekyll/jemoji
+:jekyll-mentions: https://github.com/jekyll/jekyll-mentions
+:html-regex: https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454
Since I wanted to add anchors to headers in all documents, this Jekyll hook
works on `:documents` after they have been transformed into HTML, the
`:post_render` phase:
-```ruby
+[source,ruby]
+----
Jekyll::Hooks.register :documents, :post_render do |doc|
if doc.output_ext == ".html"
doc.output =
@@ -37,30 +39,30 @@ Jekyll::Hooks.register :documents, :post_render do |doc|
)
end
end
-```
-
-I've derived my implementations from two "official"[^official] hooks,
-[jemoji][4] and [jekyll-mentions][5].
+----
-[4]: https://github.com/jekyll/jemoji
-[5]: https://github.com/jekyll/jekyll-mentions
-[^official]: I don't know how official they are, I just assumed it because they
- live in the same organization inside GitHub that Jekyll does.
+I've derived my implementations from two
+"official"{empty}footnote:official[
+ I don't know how official they are, I just assumed it because they live in the
+ same organization inside GitHub that Jekyll does.
+] hooks, {jemoji}[jemoji] and {jekyll-mentions}[jekyll-mentions].
All I did was to wrap the header tag inside an `<a>`, and set the `href` of that
-`<a>` to the existing id of the header. Before the hook the HTML looks like:
+`<a>` to the existing id of the header. Before the hook the HTML looks like:
-```html
+[source,html]
+----
...some unmodified text...
<h2 id="my-header">
My header
</h2>
...more unmodified text...
-```
+----
And after the hook should turn that into:
-```html
+[source,html]
+----
...some unmodified text...
<a href="#my-header">
<h2 id="my-header">
@@ -68,20 +70,18 @@ And after the hook should turn that into:
</h2>
</a>
...more unmodified text...
-```
+----
The used regexp tries to match only h1-h6 tags, and keep the rest of the HTML
-attributes untouched, since this isn't a general HTML parser, but the generated HTML
-is somewhat under your control. Use at your own risk because
-[you shouldn't parse HTML with regexps][6]. Also I used this strategy in my
-environment, where no other plugins are installed. I haven't considered how this
-approach may conflict with other Jekyll plugins.
-
-[6]: https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454
+attributes untouched, since this isn't a general HTML parser, but the generated
+HTML is somewhat under your control. Use at your own risk because
+{html-regex}[you shouldn't parse HTML with regexps]. Also I used this strategy
+in my environment, where no other plugins are installed. I haven't considered
+how this approach may conflict with other Jekyll plugins.
In the new anchor tag you can add your custom CSS class to style it as you wish.
-## Anchor on code blocks
+== Anchor on code blocks
Adding anchors to code blocks needs a little bit of extra work, because line
numbers themselves don't have preexisting ids, so we need to generate them
@@ -90,7 +90,8 @@ without duplications between multiple code blocks in the same page.
Similarly, this Jekyll hook also works on `:documents` in the `:post_render`
phase:
-```ruby
+[source,ruby]
+----
PREFIX = '<pre class="lineno">'
POSTFIX = '</pre>'
Jekyll::Hooks.register :documents, :post_render do |doc|
@@ -111,25 +112,27 @@ Jekyll::Hooks.register :documents, :post_render do |doc|
end
end
end
-```
+----
This solution assumes the default Jekyll toolbox with code line numbers turned
on in `_config.yml`:
-```yaml
+[source,yaml]
+----
kramdown:
syntax_highlighter_opts:
span:
line_numbers: false
block:
line_numbers: true
-```
+----
The anchors go from B1-L1 to BN-LN, using the `code_block_counter` to track
-which code block we're in and don't duplicate anchor ids. Before the hook the
+which code block we're in and don't duplicate anchor ids. Before the hook the
HTML looks like:
-```html
+[source,html]
+----
...some unmodified text...
<pre class="lineno">1
2
@@ -138,11 +141,12 @@ HTML looks like:
5
</pre>
...more unmodified text...
-```
+----
And after the hook should turn that into:
-```html
+[source,html]
+----
...some unmodified text...
<pre class="lineno"><a id="B1-L1" href="#B1-L1">1</a>
<a id="B1-L2" href="#B1-L2">2</a>
@@ -150,6 +154,6 @@ And after the hook should turn that into:
<a id="B1-L4" href="#B1-L4">4</a>
<a id="B1-L5" href="#B1-L5">5</a></pre>
...more unmodified text...
-```
+----
Happy writing :)
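The header-wrapping rewrite in this post can be sanity-checked outside Jekyll; here is a rough approximation of the same transformation done with `sed` instead of the Ruby hook (the real hook uses Ruby's regexp engine, so this is only a sketch):

```shell
# Approximate the hook's rewrite: wrap an <h1>..<h6> opener in a
# self-linking <a>, and close the <a> after the matching </hN>.
html='<h2 id="my-header">My header</h2>'
echo "$html" |
  sed -E 's|<h([1-6]) id="([^"]+)">|<a href="#\2"><h\1 id="\2">|; s|</h([1-6])>|</h\1></a>|'
# <a href="#my-header"><h2 id="my-header">My header</h2></a>
```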
diff --git a/src/content/tils/2020/08/14/browse-git.adoc b/src/content/tils/2020/08/14/browse-git.adoc
index d06f0c1..556288c 100644
--- a/src/content/tils/2020/08/14/browse-git.adoc
+++ b/src/content/tils/2020/08/14/browse-git.adoc
@@ -1,28 +1,15 @@
----
-
-title: Browse a git repository at a specific commit
-
-date: 2020-08-14
-
-layout: post
-
-lang: en
-
-ref: browse-a-git-repository-at-a-specific-commit
-
-eu_categories: git
-
----
+= Browse a git repository at a specific commit
I commonly use tools like `git log` together with `git show` when inspecting
past changes in a repository:
-```shell
+[source,shell]
+----
git log
# search for a the commit I'm looking for
git show <my-commit>
# see the diff for the commit
-```
+----
But I also wanted to not only be able to look at the diff of a specific commit,
but to browse the whole repository at that specific commit.
@@ -30,55 +17,59 @@ but to browse the whole repository at that specific commit.
I used to accomplish it the "brute force" way: clone the whole repository in
another folder and checkout the commit there:
-```shell
+[source,shell]
+----
git clone <original-repo> /tmp/tmp-repo-clone
cd /tmp-repo-clone
git checkout <my-commit>
-```
+----
But git itself allows us to specify the directory of the checkout by using the
-`--work-tree` global git flag. This is what `man git` says about it:
+`--work-tree` global git flag. This is what `man git` says about it:
-```txt
+[source,txt]
+----
--work-tree=<path>
Set the path to the working tree. It can be an absolute path or a path relative to the current working
directory. This can also be controlled by setting the GIT_WORK_TREE environment variable and the
core.worktree configuration variable (see core.worktree in git-config(1) for a more detailed
discussion).
-```
+----
-So it allows us to set the desired path of the working tree. So if we want to
+So it allows us to set the desired path of the working tree. So if we want to
copy the contents of the current working tree into `copy/`:
-```shell
+[source,shell]
+----
mkdir copy
git --work-tree=copy/ checkout .
-```
+----
-After that `copy/` will contain a replica of the code in HEAD. But to checkout a
-specific, we need some extra parameters:
+After that `copy/` will contain a replica of the code in HEAD. But to checkout
+a specific commit, we need some extra parameters:
-```shell
+[source,shell]
+----
git --work-tree=<dir> checkout <my-commit> -- .
-```
+----
There's an extra `-- .` at the end, which initially looks like we're sending
-Morse signals to git, but we're actually saying to `git-checkout` which
-sub directory of `<my-commit>` we want to look at. Which means we can do
-something like:
+Morse signals to git, but we're actually saying to `git-checkout` which
+subdirectory of `<my-commit>` we want to look at. Which means we can do
+something like:
-```shell
+[source,shell]
+----
git --work-tree=<dir> checkout <my-commit> -- src/
-```
+----
And with that `<dir>` will only contain what was inside `src/` at `<commit>`.
After any of those checkouts, you have to `git reset .` to reset your current
staging area back to what it was before the checkout.
+== References
-## References
-
-1. [GIT: Checkout to a specific folder][0] (StackOverflow)
+:so-link: https://stackoverflow.com/a/16493707
-[0]: https://stackoverflow.com/a/16493707
+. {so-link}[GIT: Checkout to a specific folder] (StackOverflow)
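The `--work-tree` checkout described above can be rehearsed end to end in a scratch repository; all paths here are throwaway:

```shell
# Scratch demo of --work-tree: materialize an old commit into a separate
# directory without touching the main working tree.
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
echo v1 > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm 'first'
first=$(git rev-parse HEAD)
echo v2 > file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qam 'second'
mkdir ../snapshot
git --work-tree=../snapshot checkout -q "$first" -- .
cat ../snapshot/file.txt   # v1: the content at the first commit
git reset -q .             # restore the staging area, as the post notes
```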
diff --git a/src/content/tils/2020/08/16/git-search.adoc b/src/content/tils/2020/08/16/git-search.adoc
index f3ae6f0..5ad0eae 100644
--- a/src/content/tils/2020/08/16/git-search.adoc
+++ b/src/content/tils/2020/08/16/git-search.adoc
@@ -1,59 +1,48 @@
----
-
-title: Search in git
-
-date: 2020-08-16
-
-layout: post
-
-lang: en
-
-ref: search-in-git
-
-eu_categories: git
-
----
+= Search in git
Here's a useful trio to know about to help you search things in git:
-1. `git show <commit>`
-2. `git log --grep='<regexp>'`
-3. `git grep '<regexp>' [commit]`
+. `git show <commit>`
+. `git log --grep='<regexp>'`
+. `git grep '<regexp>' [commit]`
-## 1. `git show <commit>`
+== 1. `git show <commit>`
Show a specific commit and its diff:
-```shell
+[source,shell]
+----
git show
# shows the latest commit
git show <commit>
# shows a specific <commit>
git show v1.2
# shows commit tagged with v1.2
-```
+----
-## 2. `git log --grep='<regexp>'`
+== 2. `git log --grep='<regexp>'`
Search through the commit messages:
-```shell
+[source,shell]
+----
git log --grep='refactor'
-```
+----
-## 3. `git grep '<regexp>' [commit]`
+== 3. `git grep '<regexp>' [commit]`
+
+:browse-article: link:../14/browse-git.html
Search content in git history:
-```shell
+[source,shell]
+----
git grep 'TODO'
# search the repository for the "TODO" string
git grep 'TODO' $(git rev-list --all)
# search the whole history for "TODO" string
-```
+----
And if you find an occurrence of the regexp in a specific commit and you want to
-browse the repository in that point in time, you can
-[use git checkout for that][0].
-
-[0]: {% link _tils/2020-08-14-browse-a-git-repository-at-a-specific-commit.md %}
+browse the repository at that point in time, you can {browse-article}[use git
+checkout for that].
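The trio above can be exercised in a scratch repository; a throwaway sketch:

```shell
# Scratch repo exercising the trio: show, log --grep, and grep.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
printf 'TODO: write the real thing\n' > notes.txt
git add notes.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm 'refactor: extract notes'
git show --stat HEAD                  # 1. the commit and its diffstat
git log --oneline --grep='refactor'   # 2. finds the commit by message
git grep -l 'TODO'                    # 3. notes.txt
```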
diff --git a/src/content/tils/2020/08/28/grep-online.adoc b/src/content/tils/2020/08/28/grep-online.adoc
index 8b3b63f..cade066 100644
--- a/src/content/tils/2020/08/28/grep-online.adoc
+++ b/src/content/tils/2020/08/28/grep-online.adoc
@@ -1,31 +1,19 @@
----
+= Grep online repositories
-title: Grep online repositories
-
-date: 2020-08-28
-
-layout: post
-
-lang: en
-
-ref: grep-online-repositories
-
-eu_categories: git
-
----
+:cgit: https://git.zx2c4.com/cgit/
I often find interesting source code repositories online that I want to grep for
some pattern but I can't, because either:
-- the repository is on [cgit][cgit] or a similar code repository that doesn't
- allow search in files, or;
-- the search function is really bad, and doesn't allow me to use regular expressions for searching patterns in the code.
-
-[cgit]: https://git.zx2c4.com/cgit/
+* the repository is on {cgit}[cgit] or a similar code repository that doesn't
+ allow search in files, or;
+* the search function is really bad, and doesn't allow me to use regular
+ expressions for searching patterns in the code.
Here's a simple script that allows you to overcome that problem easily:
-```shell
+[source,shell]
+----
#!/usr/bin/env bash
set -eu
@@ -66,7 +54,7 @@ pushd "/tmp/git-search/${DIRNAME}"
shift 3 || shift 2 # when "--" is missing
git grep "${REGEX_PATTERN}" "${@}"
-```
+----
It is a wrapper around `git grep` that downloads the repository when missing.
Save in a file called `git-search`, make the file executable and add it to your
@@ -74,33 +62,35 @@ path.
Overview:
-- *lines 1~2*:
-
- Bash shebang and the `set -eu` options to exit on error or undefined
- variables.
+* _lines 1~2_:
++
+Bash shebang and the `set -eu` options to exit on error or undefined
+variables.
-- *lines 4~30*:
+* _lines 4~30_:
++
+Usage text to be printed when providing fewer arguments than expected.
- Usage text to be printed when providing less arguments than expected.
+* _line 33_:
++
+Extract the repository name from the URL, removing trailing slashes.
-- *line 33*:
+* _lines 34~37_:
++
+Download the repository when missing and go to the folder.
- Extract the repository name from the URL, removing trailing slashes.
+* _line 39_:
++
+Make the variable `$@` contain the rest of the unused arguments.
-- *lines 34~37*:
-
- Download the repository when missing and go to the folder.
-
-- *line 39*:
-
- Make the variable `$@` contain the rest of the unused arguments.
-
-- *line 40*:
-
- Perform `git grep`, forwarding the remaining arguments from `$@`.
+* _line 40_:
++
+Perform `git grep`, forwarding the remaining arguments from `$@`.
Example output:
-```shell
+
+[source,shell]
+----
$ git search 'make get-git' https://git.zx2c4.com/cgit/
Clonage dans '/tmp/git-search/cgit'...
remote: Enumerating objects: 542, done.
@@ -116,12 +106,15 @@ README: $ make get-git
$ git search 'make get-git' https://git.zx2c4.com/cgit/
/tmp/git-search/cgit ~/dev/libre/songbooks/docs
README: $ make get-git
-```
+----
-Subsequent greps on the same repository are faster because no download is needed.
+Subsequent greps on the same repository are faster because no download is
+needed.
When no argument is provided, it prints the usage text:
-```shell
+
+[source,shell]
+----
$ git search
Missing argument REGEX_PATTERN.
@@ -136,4 +129,4 @@ Examples:
Searching "make get-git" in cgit repository:
git search 'make get-git' https://git.zx2c4.com/cgit/
git search 'make get-git' https://git.zx2c4.com/cgit/ -- $(git rev-list --all)
-```
+----
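The `shift 3 || shift 2` line from the script above is worth a standalone look; a sketch with made-up arguments:

```shell
# How "$@" ends up holding only the pass-through arguments: consume the
# pattern, the URL and the optional "--", keeping the rest for git grep.
set -- 'make get-git' 'https://git.zx2c4.com/cgit/' -- -i --all-match
shift 3 || shift 2   # tolerate a missing "--"
echo "$@"            # -i --all-match
```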
diff --git a/src/content/tils/2020/09/04/email-cli-fun-profit.adoc b/src/content/tils/2020/09/04/email-cli-fun-profit.adoc
index 320f3ab..5476fac 100644
--- a/src/content/tils/2020/09/04/email-cli-fun-profit.adoc
+++ b/src/content/tils/2020/09/04/email-cli-fun-profit.adoc
@@ -1,25 +1,22 @@
----
-title: Send emails using the command line for fun and profit!
-date: 2020-09-04
-layout: post
-lang: en
-ref: send-emails-using-the-command-line-for-fun-and-profit
----
-Here are a few reasons why:
+= Send emails using the command line for fun and profit!
-1. send yourself and other people notification of cronjobs, scripts runs, CI
- jobs, *etc.*
+:ssmtp: https://wiki.archlinux.org/index.php/SSMTP
+:mailutils: https://mailutils.org/
-2. leverage the POSIX pipe `|`, and pipe emails away!
+Here are a few reasons why:
-3. because you can.
+. send yourself and other people notification of cronjobs, scripts runs, CI
+ jobs, _etc._
+. leverage the POSIX pipe `|`, and pipe emails away!
+. because you can.
Reason 3 is the fun part, reasons 1 and 2 are the profit part.
-First [install and configure SSMTP][ssmtp] for using, say, Gmail as the email
+First {ssmtp}[install and configure SSMTP] for using, say, Gmail as the email
server:
-```shell
+[source,shell]
+----
# file /etc/ssmtp/ssmtp.conf
FromLineOverride=YES
MailHub=smtp.gmail.com:587
@@ -29,19 +26,21 @@ rewriteDomain=gmail.com
root=username@gmail.com
AuthUser=username
AuthPass=password
-```
+----
-Now install [GNU Mailutils][gnu-mailutils] (`sudo apt-get install mailutils` or the
+Now install {mailutils}[GNU Mailutils] (`sudo apt-get install mailutils` or the
equivalent on your OS), and send yourself your first email:
-```shell
+[source,shell]
+----
echo body | mail -aFrom:email@example.com email@example.com -s subject
-```
+----
-And that's about it, you've got mail. Here are some more places where it might
+And that's about it, you've got mail. Here are some more places where it might
be applicable:
-```shell
+[source,shell]
+----
# report a backup cronjob, attaching logs
set -e
@@ -61,20 +60,16 @@ finish() {
trap finish EXIT
do-long-backup-cmd-here
-```
+----
-```
+....
# share the output of a cmd with someone
some-program | mail someone@example.com -s "The weird logs that I was talking about"
-```
+....
...and so on.
You may consider adding a `alias mail='mail -aFrom:email@example.com'` so you
-don't keep re-entering the "From: " part.
+don't keep re-entering the "From:" part.
Send yourself some emails to see it working!
-
-[ssmtp]: https://wiki.archlinux.org/index.php/SSMTP
-[gnu-mailutils]: https://mailutils.org/
-[forwarding-wiki-section]: https://wiki.archlinux.org/index.php/SSMTP#Forward_to_a_Gmail_mail_server
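The trap-based notification pattern from the backup example can be dry-run without a mail server; here `mail` is stubbed out and the status message is made up:

```shell
# Dry run of the EXIT-trap notification: the trap fires on success and
# failure alike; echo stands in for the real mail invocation.
report=$(
  finish() {
    echo "backup finished with status $?"
    # real version would be: ... | mail me@example.com -s 'backup report'
  }
  trap finish EXIT
  echo 'backing up...'
)
echo "$report"
```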
diff --git a/src/content/tils/2020/09/05/oldschool-pr.adoc b/src/content/tils/2020/09/05/oldschool-pr.adoc
index 5b4e445..c1c56b7 100644
--- a/src/content/tils/2020/09/05/oldschool-pr.adoc
+++ b/src/content/tils/2020/09/05/oldschool-pr.adoc
@@ -1,43 +1,32 @@
----
+= Pull requests with Git, the old school way
-title: Pull requests with Git, the old school way
+:empty:
+:cgit: https://git.zx2c4.com/cgit/
-date: 2020-09-05
-
-layout: post
-
-lang: en
-
-ref: pull-requests-with-git-the-old-school-way
-
-eu_categories: git
-
----
It might be news to you, as it was to me, that "pull requests" that you can
-create on a Git hosting provider's web UI[^pr-webui] like
-GitLab/Bitbucket/GitHub actually comes from Git itself: `git request-pull`.
-
-[^pr-webui]: And maybe even using the Git hosting provider's API from the
- command line!
+create on a Git hosting provider’s web
+UI{empty}footnote:pr-webui[
+ And maybe even using the Git hosting provider’s API from the command line!
+] like GitLab/Bitbucket/GitHub actually comes from Git itself:
+`git request-pull`.
At the very core, they accomplish the same thing: both the original and the web
UI ones are ways for you to request the project maintainers to pull in your
-changes from your fork. It's like saying: "hi there, I did some changes on my
+changes from your fork. It’s like saying: "hi there, I did some changes on my
clone of the repository, what do you think about bringing those in?".
-The only difference is that you're working with only Git itself, so you're not
+The only difference is that you’re working with only Git itself, so you’re not
tied to any Git hosting provider: you can send pull requests across them
-transparently! You could even use your own [cgit][cgit] installation. No need to
-be locked in by any of them, putting the "D" back in "DVCS": it's a
-**distributed** version control system.
-
-[cgit]: https://git.zx2c4.com/cgit/
+transparently! You could even use your own {cgit}[cgit] installation. No need
+to be locked in by any of them, putting the "D" back in "DVCS": it’s a
+*distributed* version control system.
-## `git request-pull` introduction
+== `git request-pull` introduction
-Here's the raw output of a `git request-pull`:
+Here’s the raw output of a `git request-pull`:
-```shell
+[source,shell]
+----
$ git request-pull HEAD public-origin
The following changes since commit 302c9f2f035c0360acd4e13142428c100a10d43f:
@@ -57,35 +46,39 @@ EuAndreh (1):
_posts/2020-08-31-the-database-i-wish-i-had.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
-```
+----
That very first line is saying: "create me a pull request with only a single
commit, defined by `HEAD`, and use the URL defined by `public-origin`".
-Here's a pitfall: you may try using your `origin` remote at first where I put
+Here’s a pitfall: you may try using your `origin` remote at first where I put
`public-origin`, but that is many times pointing to something like
`git@example.com`, or `git.example.com:repo.git` (check that with
-`git remote -v | grep origin`). On both cases those are addresses available for
+`git remote -v | grep origin`). On both cases those are addresses available for
interaction via SSH, and it would be better if your pull requests used an
address ready for public consumption.
A simple solution for that is for you to add the `public-origin` alias as the
HTTPS alternative to the SSH version:
-```shell
+[source,shell]
+----
$ git remote add public-origin https://example.com/user/repo
-```
+----
Every Git hosting provider exposes repositories via HTTPS.
Experiment it yourself, and get acquainted with the CLI.
-## Delivering decentralized pull requests
+== Delivering decentralized pull requests
+
+:cli-email: link:../04/cli-email-fun-profit.html
Now that you can create the content of a pull request, you can just
-[deliver it][cli-email] to the interested parties email:
+{cli-email}[deliver it] to the interested parties email:
-```shell
+[source,shell]
+----
# send a PR with your last commit to the author's email
git request-pull HEAD public-origin | mail author@example.com -s "PR: Add thing to repo"
@@ -97,14 +90,14 @@ git request-pull -p HEAD~5 public-origin | \
# send every commit that is new in "other-branch"
git request-pull master public-origin other-branch | \
mail list@example.com -s 'PR: All commits from my "other-branch"'
-```
+----
-[cli-email]: {% link _tils/2020-09-04-send-emails-using-the-command-line-for-fun-and-profit.md %}
+== Conclusion
-## Conclusion
+:distgit: https://drewdevault.com/2018/07/23/Git-is-already-distributed.html
-In practice, I've never used or seen anyone use pull requests this way:
-everybody is just [sending patches via email][decentralized-git].
+In practice, I’ve never used or seen anyone use pull requests this way:
+everybody is just {distgit}[sending patches via email].
If you stop to think about this model, the problem of "Git hosting providers
becoming too centralized" is a non-issue, and "Git federation" proposals are a
@@ -114,5 +107,3 @@ Using Git this way is not scary or so weird as the first impression may suggest.
It is actually how Git was designed to be used.
Check `git help request-pull` for more info.
-
-[decentralized-git]: https://drewdevault.com/2018/07/23/Git-is-already-distributed.html
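`git request-pull` can also be rehearsed entirely locally; a sketch with throwaway repositories (branch and path names are made up):

```shell
# Local rehearsal of git request-pull: an "upstream", a clone with a
# feature branch pushed back, and the generated pull-request text.
tmp=$(mktemp -d)
cd "$tmp"
git init -q upstream
cd upstream
echo a > f
git add f
git -c user.name=demo -c user.email=demo@example.com commit -qm 'base'
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git
cd ..
git clone -q "$tmp/upstream" clone
cd clone
git checkout -qb my-feature
echo b >> f
git -c user.name=demo -c user.email=demo@example.com commit -qam 'change'
git push -q origin my-feature
git request-pull "$base" origin my-feature   # prints the pull-request text
```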
diff --git a/src/content/tils/2020/10/11/search-git-history.adoc b/src/content/tils/2020/10/11/search-git-history.adoc
index 251abe9..b6af8f9 100644
--- a/src/content/tils/2020/10/11/search-git-history.adoc
+++ b/src/content/tils/2020/10/11/search-git-history.adoc
@@ -1,41 +1,28 @@
----
+= Search changes to a filename pattern in Git history
-title: Search changes to a filename pattern in Git history
+:yet: link:../../08/14/browse-git.html
+:another: link:../../08/16/git-search.html
+:entry: link:../../08/28/grep-online.html
-date: 2020-10-11
+This is {yet}[yet] {another}[another] {entry}["search in Git"] TIL entry. You
+could say that Git has an unintuitive CLI, or that it is very powerful.
-layout: post
+I wanted to search for an old file that I knew was in the history of the
+repository, but was deleted some time ago. So I didn't really remember the
+name, only bits of it.
-lang: en
+I immediately went to the list of TILs I had written on searching in Git, but it
+wasn't readily obvious how to do it, so here it goes:
-ref: search-changes-to-a-filename-pattern-in-git-history
-
-eu_categories: git
-
----
-
-This is [yet][git-til-1] [another][git-til-2] ["search in Git"][git-til-3] TIL
-entry. You could say that Git has a unintuitive CLI, or that is it very
-powerful.
-
-I wanted to search for an old file that I new that was in the
-history of the repository, but was deleted some time ago. So I didn't really
-remember the name, only bits of it.
-
-I immediately went to the list of TILs I had written on searching in Git, but
-it wasn't readily obvious how to do it, so here it goes:
-
-```shell
+[source,shell]
+----
git log -- *pattern*
-```
+----
You could add globs before the pattern to match things on any directory, and add
our `-p` friend to promptly see the diffs:
-```shell
+[source,shell]
+----
git log -p -- **/*pattern*
-```
-
-[git-til-1]: {% link _tils/2020-08-14-browse-a-git-repository-at-a-specific-commit.md %}
-[git-til-2]: {% link _tils/2020-08-16-search-in-git.md %}
-[git-til-3]: {% link _tils/2020-08-28-grep-online-repositories.md %}
+----
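The `git log -- <glob>` trick above also finds files that no longer exist; a scratch-repo sketch:

```shell
# A file is added and later deleted; "git log -- <glob>" still lists
# every commit that touched it.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
mkdir docs
echo x > docs/old-report.txt
git add .
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add report'
git rm -q docs/old-report.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm 'drop report'
git log --oneline -- '*report*'   # both commits that touched the file
```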
diff --git a/src/content/tils/2020/11/08/find-broken-symlink.adoc b/src/content/tils/2020/11/08/find-broken-symlink.adoc
index bc97fc6..d8f6b20 100644
--- a/src/content/tils/2020/11/08/find-broken-symlink.adoc
+++ b/src/content/tils/2020/11/08/find-broken-symlink.adoc
@@ -1,36 +1,24 @@
----
+= Find broken symlinks with "find"
-title: Find broken symlinks with "find"
-
-date: 2020-11-08
-
-layout: post
-
-lang: en
-
-ref: find-broken-symlinks-with-find
-
-eu_categories: shell
-
----
+:annex: https://git-annex.branchable.com/
+:annex-wanted: https://git-annex.branchable.com/git-annex-wanted/
The `find` command knows how to show broken symlinks:
-```shell
+[source,shell]
+----
find . -xtype l
-```
+----
-This was useful to me when combined with [Git Annex][git-annex]. Its
-[`wanted`][git-annex-wanted] option allows you to have a "sparse" checkout of
-the content, and save space by not having to copy every annexed file locally:
+This was useful to me when combined with {annex}[Git Annex]. Its
+{annex-wanted}[`wanted`] option allows you to have a "sparse" checkout of the
+content, and save space by not having to copy every annexed file locally:
-```shell
+[source,shell]
+----
git annex wanted . 'exclude=Music/* and exclude=Videos/*'
-```
+----
You can `find` any broken symlinks outside those directories by querying with
Git Annex itself, but `find . -xtype l` works on other places too, where broken
symlinks might be a problem.
-
-[git-annex]: https://git-annex.branchable.com/
-[git-annex-wanted]: https://git-annex.branchable.com/git-annex-wanted/
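The `find . -xtype l` invocation above (a GNU find extension) can be seen in action with one healthy and one dangling symlink:

```shell
# Only the dangling symlink is reported by -xtype l.
tmp=$(mktemp -d)
cd "$tmp"
touch real.txt
ln -s real.txt good-link
ln -s missing.txt broken-link
find . -xtype l   # ./broken-link
```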
diff --git a/src/content/tils/2020/11/12/diy-nix-bash-ci.adoc b/src/content/tils/2020/11/12/diy-nix-bash-ci.adoc
index 3336482..73919d5 100644
--- a/src/content/tils/2020/11/12/diy-nix-bash-ci.adoc
+++ b/src/content/tils/2020/11/12/diy-nix-bash-ci.adoc
@@ -1,27 +1,17 @@
----
+= DIY bare bones CI server with Bash and Nix
-title: DIY bare bones CI server with Bash and Nix
-
-date: 2020-11-12 3
-
-layout: post
-
-lang: en
-
-ref: diy-bare-bones-ci-server-with-bash-and-nix
-
-eu_categories: ci
-
----
+:post-receive: https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
+:example-project: https://euandreh.xyz/remembering/ci.html
With a server with Nix installed (no need for NixOS), you can leverage its build
-isolation for running CI jobs by adding a [post-receive][post-receive] Git hook
+isolation for running CI jobs by adding a {post-receive}[post-receive] Git hook
to the server.
In most of my projects I like to keep a `test` attribute which runs the test with
-`nix-build -A test`. This way, a post-receive hook could look like:
+`nix-build -A test`. This way, a post-receive hook could look like:
-```shell
+[source,shell]
+----
#!/usr/bin/env bash
set -Eeuo pipefail
set -x
@@ -42,33 +32,30 @@ finish() {
trap finish EXIT
nix-build -A test
-```
+----
-We initially (lines #5 to #8) create a log file, named after *when* the run is
-running and for *which* commit it is running for. The `exec` and `tee` combo
-allows the output of the script to go both to `stdout` *and* the log file. This
+We initially (lines #5 to #8) create a log file, named after _when_ the run is
+running and _which_ commit it is running for. The `exec` and `tee` combo
+allows the output of the script to go both to `stdout` _and_ the log file. This
makes the logs output show up when you do a `git push`.
Lines #10 to #13 create a fresh clone of the repository and line #20 runs the
test command.
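The per-ref loop the hook runs can be sketched outside of Git as well; `handle_push` below is a hypothetical stand-in that only reports what a real hook would build from the `old new ref` lines post-receive gets on stdin:

```shell
# Hypothetical stand-in for the post-receive loop: Git feeds the hook
# "<old-sha> <new-sha> <ref>" lines on stdin, one per pushed ref.
handle_push() {
  while read -r _old new ref; do
    printf 'building %s at %s\n' "$ref" "$new"
    # a real hook would create a fresh clone here and run `nix-build -A test`
  done
}
out="$(printf '0000000 abc1234 refs/heads/main\n' | handle_push)"
echo "$out"
```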
After using a similar post-receive hook for a while, I now even generate a
-simple HTML file to make the logs available ([example project][ci-logs])
+simple HTML file to make the logs available ({example-project}[example project])
through the browser.
-[post-receive]: https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
-[ci-logs]: https://euandreh.xyz/remembering/ci.html
-
-## Upsides
+== Upsides
No vendor lock-in, as all you need is a server with Nix installed.
And if you pin the Nixpkgs version you're using, this very simple setup yields
extremely sandboxed runs on a very hermetic environment.
-## Downsides
+== Downsides
Besides the many missing shiny features of this very simplistic CI, `nix-build`
-can be very resource intensive. Specifically, it consumes too much memory. So if
-it has to download too many things, or the build closure gets too big, the
+can be very resource intensive. Specifically, it consumes too much memory. So
+if it has to download too many things, or the build closure gets too big, the
server might very well run out of memory.
diff --git a/src/content/tils/2020/11/12/git-bisect-automation.adoc b/src/content/tils/2020/11/12/git-bisect-automation.adoc
index 9c34b2a..c70bb2d 100644
--- a/src/content/tils/2020/11/12/git-bisect-automation.adoc
+++ b/src/content/tils/2020/11/12/git-bisect-automation.adoc
@@ -1,18 +1,4 @@
----
-
-title: Git bisect automation
-
-date: 2020-11-12 2
-
-layout: post
-
-lang: en
-
-ref: git-bisect-automation
-
-eu_categories: git
-
----
+= Git bisect automation
It is good to have a standardized way to run builds and tests on the repository
of a project, so that you can find when a bug was introduced by using
@@ -22,14 +8,15 @@ I've already been in the situation when a bug was introduced and I didn't know
how it even was occurring, and running Git bisect over hundreds of commits to
pinpoint the failing commit was very empowering:
-```
+....
$ GOOD_COMMIT_SHA=e1fd0a817d192c5a5df72dd7422e36558fa78e46
$ git bisect start HEAD $GOOD_COMMIT_SHA
$ git bisect run sh -c './build.sh && ./run-failing-case.sh'
-```
+....
Git will then do a binary search between the commits, and run the commands you
provide it with to find the failing commit.
Instead of being afraid of doing a bisect, you should instead leverage it, and
-make Git help you dig through the history of the repository to find the bad code.
+make Git help you dig through the history of the repository to find the bad
+code.
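A self-contained way to see `git bisect run` at work is a throwaway repository where one commit deliberately introduces a bad change; every name and value below is made up for the demo, and the "failing case" is just a `grep`:

```shell
# Build a tiny repo where one commit introduces "broken", then let
# bisect find it. All names and contents are made up for the demo.
dir="$(mktemp -d)"; cd "$dir"
git init -q
git config user.email ci@example.com
git config user.name ci
echo ok > check.txt
git add check.txt
git commit -qm good
good="$(git rev-parse HEAD)"
for i in 1 2 3; do echo "ok $i" > check.txt; git commit -qam "c$i"; done
echo broken > check.txt
git commit -qam bad
bad="$(git rev-parse HEAD)"
git bisect start HEAD "$good"
# exit 0 marks a commit good, non-zero marks it bad
git bisect run sh -c 'grep -qv broken check.txt' >/dev/null
first_bad="$(git rev-parse refs/bisect/bad)"
echo "$first_bad"
```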
diff --git a/src/content/tils/2020/11/12/useful-bashvars.adoc b/src/content/tils/2020/11/12/useful-bashvars.adoc
index 33a072e..5061b64 100644
--- a/src/content/tils/2020/11/12/useful-bashvars.adoc
+++ b/src/content/tils/2020/11/12/useful-bashvars.adoc
@@ -1,59 +1,49 @@
----
+= Useful Bash variables
-title: Useful Bash variables
+:bash: https://www.gnu.org/software/bash/
+:bash-bang-bang: https://www.gnu.org/software/bash/manual/bash.html#Event-Designators
+:bash-dollar-underscore: https://www.gnu.org/software/bash/manual/bash.html#Special-Parameters
-date: 2020-11-12 1
+{bash}[GNU Bash] has a few two letter variables that may be useful when typing
+on the terminal.
-layout: post
+== `!!`: the text of the last command
-lang: en
-
-ref: useful-bash-variables
-
-eu_categories: shell
-
----
-
-[GNU Bash][gnu-bash] has a few two letter variables that may be useful when
-typing on the terminal.
-
-[gnu-bash]: https://www.gnu.org/software/bash/
-
-## `!!`: the text of the last command
-
-The [`!!` variable][previous-command] refers to the previous command, and I find
+The {bash-bang-bang}[`!!` variable] refers to the previous command, and I find
useful when following chains for symlinks:
-[previous-command]: https://www.gnu.org/software/bash/manual/bash.html#Event-Designators
-
-```shell
+[source,shell]
+----
$ which git
/run/current-system/sw/bin/git
$ readlink $(!!)
readlink $(which git)
/nix/store/5bgr1xpm4m0r72h9049jbbhagxdyrnyb-git-2.28.0/bin/git
-```
+----
It is also useful when you forget to prefix `sudo` to a command that requires
it:
-```shell
+[source,shell]
+----
$ requires-sudo.sh
requires-sudo.sh: Permission denied
$ sudo !!
sudo ./requires-sudo.sh
# all good
-```
+----
Bash prints the command expansion before executing it, so it is better for you
to follow along what it is doing.
-## `$_`: most recent parameter
+== `$_`: most recent parameter
-The [`$_` variable][recent-parameter] will give you the most recent parameter
-you provided to a previous argument, which can save you typing sometimes:
+The {bash-dollar-underscore}[`$_` variable] will give you the most recent
+parameter you provided to a previous argument, which can save you typing
+sometimes:
-```shell
+[source,shell]
+----
# instead of...
$ mkdir -p a/b/c/d/
$ cd a/b/c/d/
@@ -61,11 +51,9 @@ $ cd a/b/c/d/
# ...you can:
$ mkdir -p a/b/c/d/
$ cd $_
-```
-
-[recent-parameter]: https://www.gnu.org/software/bash/manual/bash.html#Special-Parameters
+----
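`$_` is also set in non-interactive Bash, which makes the trick easy to check outside a terminal session (the directory path below is arbitrary):

```shell
# In Bash, $_ expands to the last argument of the previous command,
# in scripts as well as interactively. The path below is arbitrary.
dest="$(bash -c 'mkdir -p /tmp/bashvar-demo/a/b/c && cd "$_" && pwd')"
echo "$dest"
```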
-## Conclusion
+== Conclusion
I wouldn't use those in a script, as it would make it terser and harder to read;
I find them useful shortcuts that are handy when writing at the interactive
diff --git a/src/content/tils/2020/11/14/gpodder-media.adoc b/src/content/tils/2020/11/14/gpodder-media.adoc
index a74b225..f722f35 100644
--- a/src/content/tils/2020/11/14/gpodder-media.adoc
+++ b/src/content/tils/2020/11/14/gpodder-media.adoc
@@ -1,24 +1,15 @@
----
+= gPodder as a media subscription manager
-title: gPodder as a media subscription manager
+:re-discover: https://www.charlieharrington.com/unexpected-useless-and-urgent
+:gpodder: https://gpodder.github.io/
-date: 2020-11-14
+As we {re-discover}[re-discover] the value of Atom/RSS feeds, most useful feed
+clients I know of don't support media, specifically audio and video.
-layout: post
+{gpodder}[gPodder] does.
-lang: en
-
-ref: gpodder-as-a-media-subscription-manager
-
----
-
-As we [re-discover][rss] the value of Atom/RSS feeds, most useful feed clients I
-know of don't support media, specifically audio and video.
-
-[gPodder][gpodder] does.
-
-It is mostly know as a desktop podcatcher. But the thing about podcasts is that
-the feed is provided through an RSS/Atom feed. So you can just use gPodder as
+It is mostly known as a desktop podcatcher. But the thing about podcasts is that
+the feed is provided through an RSS/Atom feed. So you can just use gPodder as
your media feed client, where you have control of what you look at.
The audio and video providers I know of offer an RSS/Atom view of their content,
@@ -26,8 +17,5 @@ so you can, say, treat any YouTube channel like a feed on its own.
gPodder will then manage your feeds, watched/unwatched, queue downloads, etc.
-Being obvious now, it was a big finding for me. If it got you interested, I
+Obvious in hindsight, it was a big finding for me. If it got you interested, I
recommend giving gPodder a try.
-
-[rss]: https://www.charlieharrington.com/unexpected-useless-and-urgent
-[gpodder]: https://gpodder.github.io/
diff --git a/src/content/tils/2020/11/30/git-notes-ci.adoc b/src/content/tils/2020/11/30/git-notes-ci.adoc
index f8dd063..bfce42a 100644
--- a/src/content/tils/2020/11/30/git-notes-ci.adoc
+++ b/src/content/tils/2020/11/30/git-notes-ci.adoc
@@ -1,28 +1,19 @@
----
+= Storing CI data on Git notes
-title: Storing CI data on Git notes
+:git-notes: https://git-scm.com/docs/git-notes
+:nix-bash-ci: link:../12/diy-nix-bash-ci.html
+:cgit: https://git.zx2c4.com/cgit/
-date: 2020-11-30
+Extending the bare bones CI server I've {nix-bash-ci}[talked about before],
+divoplade on Freenode suggested storing CI artifacts on {git-notes}[Git notes],
+such as tarballs, binaries, logs, _etc_.
-layout: post
+I've written a small script that will put log files and CI job data on Git
+notes, and make it visible on the porcelain log. It is a simple extension of
+the previous article:
-lang: en
-
-ref: storing-ci-data-on-git-notes
-
-eu_categories: git,ci
-
----
-
-Extending the bare bones CI server I've [talked about before][previous-article],
-divoplade on Freenode suggested storing CI artifacts on [Git notes][git-notes],
-such as tarballs, binaries, logs, *etc*.
-
-I've written a small script that will put log files and CI job data on Git notes,
-and make it visible on the porcelain log. It is a simple extension of the
-previous article:
-
-```shell
+[source,shell]
+----
#!/usr/bin/env bash
set -Eeuo pipefail
set -x
@@ -63,16 +54,17 @@ git config --global user.name 'EuAndreh CI'
./container make check site
./container make publish
-```
+----
-The important part is in the `finish()` function:
-- #25 stores the exit status and the generated filename separated by spaces;
-- #26 adds the log file in a note using the `refs/notes/ci-logs` ref;
-- #27 it adds a note to the commit saying how to see the logs.
+The important part is in the `finish()` function:
+
+- #25 stores the exit status and the generated filename separated by spaces;
+- #26 adds the log file in a note using the `refs/notes/ci-logs` ref;
+- #27 adds a note to the commit saying how to see the logs.
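The note-attaching step on its own is easy to try in a scratch repository; every name, email and file content below is a throwaway value for the demo:

```shell
# Attach a CI log file to a commit under a custom notes ref, then read
# it back. Repo, identities and contents are throwaway demo values.
dir="$(mktemp -d)"; cd "$dir"
git init -q
git config user.email ci@example.com
git config user.name ci
echo hi > f
git add f
git commit -qm c1
log="$(mktemp)"
echo 'build ok' > "$log"
git notes --ref=refs/notes/ci-logs add -F "$log" HEAD
note="$(git notes --ref=refs/notes/ci-logs show HEAD)"
echo "$note"
```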
A commit now has an attached note, and shows it whenever you look at it:
-```diff
+[source,diff]
+----
$ git show 87c57133abd8be5d7cc46afbf107f59b26066575
commit 87c57133abd8be5d7cc46afbf107f59b26066575
Author: EuAndreh <eu@euandre.org>
@@ -100,23 +92,20 @@ index d1830ca..a4ccde7 100644
(service dhcp-client-service-type)
#;
(service opensmtpd-service-type
-```
+----
-Other tools such as [cgit][cgit] will also show notes on the web interface:
-<https://euandre.org/git/servers/commit?id=87c57133abd8be5d7cc46afbf107f59b26066575>.
+Other tools such as {cgit}[cgit] will also show notes on the web interface:
+https://euandre.org/git/servers/commit?id=87c57133abd8be5d7cc46afbf107f59b26066575.
You can go even further: since cgit can serve raw blob directly, you can even
serve such artifacts (log files, release artifacts, binaries) from cgit itself:
-```shell
+[source,shell]
+----
$ SHA="$(git notes --ref=refs/notes/ci-logs list 87c57133abd8be5d7cc46afbf107f59b26066575)"
$ echo "https://euandre.org/git/servers/blob?id=$SHA"
https://euandre.org/git/servers/blob?id=1707a97bae24e3864fe7943f8dda6d01c294fb5c
-```
+----
And like that you'll have cgit serving the artifacts for you:
-<https://euandre.org/git/servers/blob?id=1707a97bae24e3864fe7943f8dda6d01c294fb5c>.
-
-[previous-article]: {% link _tils/2020-11-12-diy-bare-bones-ci-server-with-bash-and-nix.md %}
-[git-notes]: https://git-scm.com/docs/git-notes
-[cgit]: https://git.zx2c4.com/cgit/
+https://euandre.org/git/servers/blob?id=1707a97bae24e3864fe7943f8dda6d01c294fb5c.
diff --git a/src/content/tils/2020/12/15/shellcheck-repo.adoc b/src/content/tils/2020/12/15/shellcheck-repo.adoc
index 71d10a3..960812e 100644
--- a/src/content/tils/2020/12/15/shellcheck-repo.adoc
+++ b/src/content/tils/2020/12/15/shellcheck-repo.adoc
@@ -1,52 +1,41 @@
----
+= Awk snippet: ShellCheck all scripts in a repository
-title: 'Awk snippet: ShellCheck all scripts in a repository'
+:awk-20-min: https://ferd.ca/awk-in-20-minutes.html
+:shellcheck: https://www.shellcheck.net/
-date: 2020-12-15
-
-updated_at: 2020-12-16
-
-layout: post
-
-lang: en
-
-ref: awk-snippet-shellcheck-all-scripts-in-a-repository
-
-eu_categories: shell
-
----
-
-Inspired by Fred Herbert's "[Awk in 20 Minutes][awk-20min]", here's a problem I
+Inspired by Fred Herbert's "{awk-20-min}[Awk in 20 Minutes]", here's a problem I
just solved with a line of Awk: run ShellCheck in all scripts of a repository.
In my repositories I usually have Bash and POSIX scripts, which I want to keep
-tidy with [ShellCheck][shellcheck]. Here's the first version of
+tidy with {shellcheck}[ShellCheck]. Here's the first version of
`assert-shellcheck.sh`:
-```shell
+[source,shell]
+----
#!/bin/sh -eux
find . -type f -name '*.sh' -print0 | xargs -0 shellcheck
-```
+----
This is the type of script that I copy around to all repositories, and I want it
to be capable of working on any repository, without requiring a list of files to
run ShellCheck on.
-This first version worked fine, as all my scripts had the '.sh' ending. But I
+This first version worked fine, as all my scripts had the `.sh` ending. But I
recently added some scripts without any extension, so `assert-shellcheck.sh`
-called for a second version. The first attempt was to try grepping the shebang
-line:
+called for a second version. The first attempt was to try grepping the shebang line:
-```shell
+[source,shell]
+----
$ grep '^#!/' assert-shellcheck.sh
#!/bin/sh -eux
-```
+----
-Good, we have a grep pattern on the first try. Let's try to find all the
+Good, we have a grep pattern on the first try. Let's try to find all the
matching files:
-```shell
+[source,shell]
+----
$ find . -type f | xargs grep -l '^#!/'
./TODOs.org
./.git/hooks/pre-commit.sample
@@ -76,7 +65,7 @@ $ find . -type f | xargs grep -l '^#!/'
./scripts/songbooks.in
./scripts/with-container.sh
./scripts/assert-shellcheck.sh
-```
+----
This approach has a problem, though: it includes files ignored by Git, such as
`builld-aux/install-sh~`, and even goes into the `.git/` directory and finds
@@ -84,7 +73,8 @@ sample hooks in `.git/hooks/*`.
To list the files that Git is tracking we'll try `git ls-files`:
-```shell
+[source,shell]
+----
$ git ls-files | xargs grep -l '^#!/'
TODOs.org
bootstrap
@@ -99,23 +89,25 @@ scripts/compile-readme.sh
scripts/generate-tasks-and-bugs.sh
scripts/songbooks.in
scripts/with-container.sh
-```
+----
It looks to be almost there, but the `TODOs.org` entry shows a flaw in it: grep
-is looking for a `'^#!/'` pattern on any part of the file. In my case,
+is looking for a +'^#!/'+ pattern in any part of the file. In my case,
`TODOs.org` had a snippet in the middle of the file where a line started with
-`#!/bin/sh`.
++#!/bin/sh+.
-So what we actually want is to match the **first** line against the pattern. We
+So what we actually want is to match the *first* line against the pattern. We
could loop through each file, get the first line with `head -n 1` and grep
-against that, but this is starting to look messy. I bet there is another way of
+against that, but this is starting to look messy. I bet there is another way of
doing it concisely...
-Let's try Awk. I need a way to select the line numbers to replace `head -n 1`,
-and to stop processing the file if the pattern matches. A quick search points me
-to using `FNR` for the former, and `{ nextline }` for the latter. Let's try it:
+Let's try Awk. I need a way to select the line numbers to replace `head -n 1`,
+and to stop processing the file if the pattern matches. A quick search points
+me to using `FNR` for the former, and `{ nextfile }` for the latter. Let's try
+it:
-```shell
+[source,shell]
+----
$ git ls-files | xargs awk 'FNR>1 { nextfile } /^#!\// { print FILENAME; nextfile }'
bootstrap
build-aux/with-guile-env.in
@@ -129,43 +121,41 @@ scripts/compile-readme.sh
scripts/generate-tasks-and-bugs.sh
scripts/songbooks.in
scripts/with-container.sh
-```
+----
Great! Only `TODOs.org` is missing, but the script is much better: instead of
matching against any part of the file that may have a shebang-like line, we only
-look for the first. Let's put it back into the `assert-shellcheck.sh` file and
+look for the first. Let's put it back into the `assert-shellcheck.sh` file and
use `NULL` for separators to accommodate files with spaces in the name:
-```
+....
#!/bin/sh -eux
git ls-files -z | \
xargs -0 awk 'FNR>1 { nextfile } /^#!\// { print FILENAME; nextfile }' | \
xargs shellcheck
-```
+....
This is where I've stopped, but I imagine a likely improvement: match against
-only `#!/bin/sh` and `#!/usr/bin/env bash` shebangs (the ones I use most), to
+only +#!/bin/sh+ and +#!/usr/bin/env bash+ shebangs (the ones I use most), to
avoid running ShellCheck on Perl files, or other shebangs.
Also when reviewing the text of this article, I found that `{ nextfile }` is a
-GNU Awk extension. It would be an improvement if `assert-shellcheck.sh` relied
+GNU Awk extension. It would be an improvement if `assert-shellcheck.sh` relied
on the POSIX subset of Awk for working correctly.
-## *Update*
+== _Update_
After publishing, I could remove `{ nextfile }` and even make the script
simpler:
-```shell
+[source,shell]
+----
#!/bin/sh -eux
git ls-files -z | \
xargs -0 awk 'FNR==1 && /^#!\// { print FILENAME }' | \
xargs shellcheck
-```
+----
Now both the shell and Awk usage are POSIX compatible.
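The final one-liner is easy to exercise on synthetic files (the file names below are invented for the demo): only a file whose very first line is a shebang should be reported.

```shell
# Two files: one with a shebang on line 1, one with a shebang-looking
# line only in the middle. Only the first should be reported.
dir="$(mktemp -d)"; cd "$dir"
printf '#!/bin/sh\necho hi\n' > script-a
printf 'just text\n#!/bin/sh in the middle\n' > notes.txt
found="$(printf 'script-a\0notes.txt\0' \
  | xargs -0 awk 'FNR==1 && /^#!\// { print FILENAME }')"
echo "$found"
```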
-
-[awk-20min]: https://ferd.ca/awk-in-20-minutes.html
-[shellcheck]: https://www.shellcheck.net/
diff --git a/src/content/tils/2020/12/29/svg.adoc b/src/content/tils/2020/12/29/svg.adoc
index 54cca9a..2022c83 100644
--- a/src/content/tils/2020/12/29/svg.adoc
+++ b/src/content/tils/2020/12/29/svg.adoc
@@ -1,32 +1,21 @@
----
+= SVG favicon
-title: SVG favicon
-
-date: 2020-12-29
-
-updated_at: 2021-01-12
-
-layout: post
-
-lang: en
-
-ref: svg-favicon
-
----
+:favicon: link:../../../../img/favicon.svg
I've wanted to change this website's favicon from a plain `.ico` file to a
-proper SVG. The problem I was trying to solve was to reuse the same image on
+proper SVG. The problem I was trying to solve was to reuse the same image on
other places, such as avatars.
Generating a PNG from the existing 16x16 icon was possible but bad: the final
-image was blurry. Converting the `.ico` to an SVG was possible, but sub-optimal:
-tools try to guess some vector paths, and the final SVG didn't match the
-original.
+image was blurry. Converting the `.ico` to an SVG was possible, but
+sub-optimal: tools try to guess some vector paths, and the final SVG didn't
+match the original.
Instead I used a tool to draw the "vector pixels" as black squares, and after
getting the final result I manually cleaned-up the generated XML:
-```xml
+[source,xml]
+----
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16">
<path d="M 0 8 L 1 8 L 1 9 L 0 9 L 0 8 Z" />
@@ -89,27 +78,28 @@ getting the final result I manually cleaned-up the generated XML:
<path d="M 14 11 L 15 11 L 15 12 L 14 12 L 14 11 Z" />
<path d="M 14 12 L 15 12 L 15 13 L 14 13 L 14 12 Z" />
</svg>
-```
+----
-The good thing about this new favicon
-(at [`/static/lord-favicon.svg`](/static/lord-favicon.svg)) is that
-a) it is simple enough that I feel
-comfortable editing it manually and b) it is an SVG, which means I can generate
-any desired size.
+The good thing about this new favicon (at {favicon}[`/static/lord-favicon.svg`])
+is that a) it is simple enough that I feel comfortable editing it manually and
+b) it is an SVG, which means I can generate any desired size.
With the new favicon file, I now had to add to the templates' `<head>` a
`<link>` to this icon:
-```html
+
+[source,html]
+----
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/static/favicon.svg">
...
-```
+----
-Still missing is a bitmap image for places that can't handle vector images. I
+Still missing is a bitmap image for places that can't handle vector images. I
used a Jekyll generator to create a PNG from the existing SVG:
-```ruby
+[source,ruby]
+----
module Jekyll
class FaviconGenerator < Generator
safe true
@@ -127,7 +117,7 @@ module Jekyll
end
end
end
-```
+----
I had to increase the priority of the generator so that it would run before
other places that would use a `{% link /static/lord-favicon.png %}`, otherwise
diff --git a/src/content/tils/2021/01/12/curl-awk-emails.adoc b/src/content/tils/2021/01/12/curl-awk-emails.adoc
index 880ddf1..875c655 100644
--- a/src/content/tils/2021/01/12/curl-awk-emails.adoc
+++ b/src/content/tils/2021/01/12/curl-awk-emails.adoc
@@ -1,28 +1,25 @@
----
+= Awk snippet: send email to multiple recipients with cURL
-title: 'Awk snippet: send email to multiple recipients with cURL'
+:neomutt: https://neomutt.org/
+:found-out-article: https://blog.edmdesigner.com/send-email-from-linux-command-line/
+:curl: https://curl.se/
-date: 2021-01-12
-
-layout: post
-
-lang: en
-
-ref: awk-snippet-send-email-to-multiple-recipients-with-curl
-
----
-
-As I experiment with [Neomutt][neomutt], I wanted to keep being able to enqueue emails for sending later like my previous setup, so that I didn't rely on having an internet connection.
+As I experiment with {neomutt}[Neomutt], I wanted to keep being able to enqueue
+emails for sending later like my previous setup, so that I didn't rely on having
+an internet connection.
My requirements for the `sendmail` command were:
-1. store the email in a file, and send it later.
-1. send from different addresses, using different SMTP servers;
-I couldn't find an MTA that could accomplish that, but I was able to quickly write a solution.
+. store the email in a file, and send it later;
+. send from different addresses, using different SMTP servers.
+
+I couldn't find an MTA that could accomplish that, but I was able to quickly
+write a solution.
The first part was the easiest: store the email in a file:
-```shell
+[source,shell]
+----
# ~/.config/mutt/muttrc:
set sendmail=~/bin/enqueue-email.sh
@@ -30,20 +27,24 @@ set sendmail=~/bin/enqueue-email.sh
#!/bin/sh -eu
cat - > "$HOME/mbsync/my-queued-emails/$(date -Is)"
-```
+----
-Now that I had the email file store locally, I needed a program to send the email from the file, so that I could create a cronjob like:
+Now that I had the email file stored locally, I needed a program to send the
+email from the file, so that I could create a cronjob like:
-```shell
+[source,shell]
+----
for f in ~/mbsync/my-queued-emails/*; do
~/bin/dispatch-email.sh "$f" && rm "$f"
done
-```
+----
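The queue-then-dispatch cycle can be sketched with a fake dispatcher; every path and name below is invented, and the real `dispatch-email.sh` call is replaced by a counter so each file is deleted only after a successful "dispatch":

```shell
# Enqueue one fake email, then process the queue the way the cronjob
# would, deleting each file only after a successful "dispatch".
queue="$(mktemp -d)"
printf 'From: me@example.com\n\nhello\n' > "$queue/$(date -Is)"
sent=0
for f in "$queue"/*; do
  # a real setup would run dispatch-email.sh "$f" here
  sent=$((sent + 1)) && rm "$f"
done
echo "$sent"
remaining="$(ls -A "$queue")"
```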
-The `dispatch-email.sh` would have to look at the `From: ` header and decide which SMTP server to use.
-As I [found out][curl-email] that [curl][curl] supports SMTP and is able to send emails, this is what I ended up with:
+The `dispatch-email.sh` would have to look at the `From:` header and decide
+which SMTP server to use. As I {found-out-article}[found out] that {curl}[curl]
+supports SMTP and is able to send emails, this is what I ended up with:
-```shell
+[source,shell]
+----
#!/bin/sh -eu
F="$1"
@@ -79,24 +80,30 @@ else
echo 'Bad "From: " address'
exit 1
fi
-```
+----
Most of the curl flags used are self-explanatory, except for `$rcpt`.
-curl connects to the SMTP server, but doesn't set the recipient address by looking at the message.
-My solution was to generate the curl flags, store them in `$rcpt` and use it unquoted to leverage shell word splitting.
+curl connects to the SMTP server, but doesn't set the recipient address by
+looking at the message. My solution was to generate the curl flags, store them
+in `$rcpt` and use it unquoted to leverage shell word splitting.
-To me, the most interesting part was building the `$rcpt` flags.
-My first instinct was to try grep, but it couldn't print only matches in a regex.
-As I started to turn towards sed, I envisioned needing something else to loop over the sed output, and I then moved to Awk.
+To me, the most interesting part was building the `$rcpt` flags. My first
+instinct was to try grep, but it couldn't print only the matches of a regex. As I
+started to turn towards sed, I envisioned needing something else to loop over
+the sed output, and I then moved to Awk.
-In the short Awk snippet, 3 things were new to me: the `match(...)`, `split(...)` and `for () {}`.
-The only other function I have ever used was `gsub(...)`, but these new ones felt similar enough that I could almost guess their behaviour and arguments.
-`match(...)` stores the matches of a regex on the given array positionally, and `split(...)` stores the chunks in the given array.
+In the short Awk snippet, 3 things were new to me: the `match(...)`,
+`split(...)` and `for () {}`. The only other function I have ever used was
+`gsub(...)`, but these new ones felt similar enough that I could almost guess
+their behaviour and arguments. `match(...)` stores the matches of a regex on
+the given array positionally, and `split(...)` stores the chunks in the given
+array.
I even did it incrementally:
-```shell
+[source,shell]
+----
$ H='To: to@example.com, to2@example.com\nCc: cc@example.com, cc2@example.com\nBcc: bcc@example.com,bcc2@example.com\n'
$ printf "$H" | awk '/^To: .*$/ { print $0 }'
To: to@example.com, to2@example.com
@@ -115,28 +122,27 @@ to@example.com,
$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos[2] }'
to2@example.com
$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos[3] }'
+----
-```
+(This isn't the verbatim interactive session, but a cleaned version to make it
+more readable.)
-(This isn't the verbatim interactive session, but a cleaned version to make it more readable.)
+At this point, I realized I needed a for loop over the `tos` array, and I moved
+the Awk snippet into the `~/bin/dispatch-email.sh`. I liked the final thing:
-At this point, I realized I needed a for loop over the `tos` array, and I moved the Awk snippet into the `~/bin/dispatch-email.sh`.
-I liked the final thing:
-
-```awk
+[source,awk]
+----
match($0, /^(To|Cc|Bcc): (.*)$/, m) {
split(m[2], tos, ",")
for (i in tos) {
print "--mail-rcpt " tos[i]
}
}
-```
+----
-As I learn more about Awk, I feel that it is too undervalued, as many people turn to Perl or other programming languages when Awk suffices.
-The advantage is pretty clear: writing programs that run on any POSIX system, without extra dependencies required.
+As I learn more about Awk, I feel that it is too undervalued, as many people
+turn to Perl or other programming languages when Awk suffices. The advantage is
+pretty clear: writing programs that run on any POSIX system, without extra
+dependencies required.
Coding to the standards is underrated.
-
-[neomutt]: https://neomutt.org/
-[curl-email]: https://blog.edmdesigner.com/send-email-from-linux-command-line/
-[curl]: https://curl.se/
diff --git a/src/content/tils/2021/01/17/posix-shebang.adoc b/src/content/tils/2021/01/17/posix-shebang.adoc
index 5f5b897..4e2fbe8 100644
--- a/src/content/tils/2021/01/17/posix-shebang.adoc
+++ b/src/content/tils/2021/01/17/posix-shebang.adoc
@@ -1,55 +1,58 @@
= POSIX sh and shebangs
-date: 2021-01-17
+:awk-1: link:../../../2020/12/15/shellcheck-repo.html
+:awk-2: link:../12/curl-awk-emails.html
-layout: post
+As I {awk-1}[keep moving] {awk-2}[towards POSIX], I'm on the process of
+migrating all my Bash scripts to POSIX sh.
-lang: en
-
-ref: posix-sh-and-shebangs
-
----
-
-As I [keep moving][posix-awk-0] [towards POSIX][posix-awk-1], I'm on the process of migrating all my Bash scripts to POSIX sh.
-
-As I dropped `[[`, arrays and other Bashisms, I was left staring at the first line of every script, wondering what to do: what is the POSIX sh equivalent of `#!/usr/bin/env bash`?
-I already knew that POSIX says nothing about shebangs, and that the portable way to call a POSIX sh script is `sh script.sh`, but I didn't know what to do with that first line.
+As I dropped `[[`, arrays and other Bashisms, I was left staring at the first
+line of every script, wondering what to do: what is the POSIX sh equivalent of
+`#!/usr/bin/env bash`? I already knew that POSIX says nothing about shebangs,
+and that the portable way to call a POSIX sh script is `sh script.sh`, but
+I didn't know what to do with that first line.
What I had previously was:
-```shell
+
+[source,shell]
+----
#!/usr/bin/env bash
set -Eeuo pipefail
cd "$(dirname "${BASH_SOURCE[0]}")"
-```
+----
-Obviously, the `$BASH_SOURCE` would be gone, and I would have to adapt some of my scripts to not rely on the script location.
-The `-E` and `-o pipefail` options were also gone, and would be replaced by nothing.
+Obviously, the `$BASH_SOURCE` would be gone, and I would have to adapt some of
+my scripts to not rely on the script location. The `-E` and `-o pipefail`
+options were also gone, and would be replaced by nothing.
I converted all of them to:
-```shell
+
+[source,shell]
+----
#!/bin/sh -eu
-```
+----
I moved the `-eu` options to the shebang line itself, striving for conciseness.
-But as I changed callers from `./script.sh` to `sh script.sh`, things started to fail.
-Some tests that should fail reported errors, but didn't return 1.
+But as I changed callers from `./script.sh` to `sh script.sh`, things started to
+fail. Some tests that should fail reported errors, but didn't return 1.
-My first reaction was to revert back to `./script.sh`, but the POSIX bug I caught is a strong strain, and when I went back to it, I figured that the callers were missing some flags.
-Specifically, `sh -eu script.sh`.
+My first reaction was to revert back to `./script.sh`, but the POSIX bug I
+caught is a strong strain, and when I went back to it, I figured that the
+callers were missing some flags. Specifically, `sh -eu script.sh`.
-Then it clicked: when running with `sh script.sh`, the shebang line with the sh options is ignored, as it is a comment!
+Then it clicked: when running with `sh script.sh`, the shebang line with the sh
+options is ignored, as it is a comment!
Which means that the shebang most friendly with POSIX is:
-```shell
+[source,shell]
+----
#!/bin/sh
set -eu
-```
+----
-1. when running via `./script.sh`, if the system has an executable at `/bin/sh`, it will be used to run the script;
-2. when running via `sh script.sh`, the sh options aren't ignored as previously.
+. when running via `./script.sh`, if the system has an executable at `/bin/sh`,
+ it will be used to run the script;
+. when running via `sh script.sh`, the sh options aren't ignored as previously.
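The "shebang is just a comment under `sh script.sh`" behaviour is easy to demonstrate with a throwaway script (the path and contents below are invented): run directly, the shebang's `-e` aborts at `false`; run via `sh`, execution continues past the failure.

```shell
# A script whose only error handling lives on the shebang line. Run
# directly, -e aborts at `false`; run via `sh`, the shebang is a
# comment and execution continues past the failure.
dir="$(mktemp -d)"
cat > "$dir/demo.sh" <<'EOF'
#!/bin/sh -eu
echo start
false
echo after-false
EOF
chmod +x "$dir/demo.sh"
direct="$("$dir/demo.sh")" || true
via_sh="$(sh "$dir/demo.sh")"
echo "$direct"
echo "$via_sh"
```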
TIL.
-
-[posix-awk-0]: {% link _tils/2020-12-15-awk-snippet-shellcheck-all-scripts-in-a-repository.md %}
-[posix-awk-1]: {% link _tils/2021-01-12-awk-snippet-send-email-to-multiple-recipients-with-curl.md %}
diff --git a/src/content/tils/2021/04/24/cl-generic-precedence.adoc b/src/content/tils/2021/04/24/cl-generic-precedence.adoc
index 8051232..541afb0 100644
--- a/src/content/tils/2021/04/24/cl-generic-precedence.adoc
+++ b/src/content/tils/2021/04/24/cl-generic-precedence.adoc
@@ -1,20 +1,10 @@
----
+= Common Lisp argument precedence order parameterization of a generic function
-title: Common Lisp argument precedence order parameterization of a generic function
+When CLOS dispatches a method, it picks the most specific method definition to
+the argument list:
-date: 2021-04-24 2
-
-layout: post
-
-lang: en
-
-ref: common-lisp-argument-precedence-order-parameterization-of-a-generic-function
-
----
-
-When CLOS dispatches a method, it picks the most specific method definition to the argument list:
-
-```lisp
+[source,lisp]
+----
* (defgeneric a-fn (x))
#<STANDARD-GENERIC-FUNCTION A-FN (0) {5815ACB9}>
@@ -39,11 +29,13 @@ When CLOS dispatches a method, it picks the most specific method definition to t
* (a-fn 1)
:NUMBER-1
-```
+----
-CLOS uses a similar logic when choosing the method from parent classes, when multiple ones are available:
+CLOS uses a similar logic when choosing the method from parent classes, when
+multiple ones are available:
-```lisp
+[source,lisp]
+----
* (defclass class-a () ())
#<STANDARD-CLASS CLASS-A {583E0B25}>
@@ -63,11 +55,13 @@ CLOS uses a similar logic when choosing the method from parent classes, when mul
; Compiling Top-Level Form:
#<STANDARD-METHOD ANOTHER-FN (CLASS-B) {584B8895}>
-```
+----
-Given the above definitions, when inheriting from `class-a` and `class-b`, the order of inheritance matters:
+Given the above definitions, when inheriting from `class-a` and `class-b`, the
+order of inheritance matters:
-```lisp
+[source,lisp]
+----
* (defclass class-a-coming-first (class-a class-b) ())
#<STANDARD-CLASS CLASS-A-COMING-FIRST {584BE6AD}>
@@ -79,11 +73,14 @@ Given the above definitions, when inheriting from `class-a` and `class-b`, the o
* (another-fn (make-instance 'class-b-coming-first))
:CLASS-B
-```
+----
-Combining the order of inheritance with generic functions with multiple arguments, CLOS has to make a choice of how to pick a method given two competing definitions, and its default strategy is prioritizing from left to right:
+Combining the order of inheritance with generic functions of multiple
+arguments, CLOS has to choose how to pick a method given two competing
+definitions, and its default strategy is prioritizing from left to right:
-```lisp
+[source,lisp]
+----
* (defgeneric yet-another-fn (obj1 obj2))
#<STANDARD-GENERIC-FUNCTION YET-ANOTHER-FN (0) {584D9EC9}>
@@ -95,43 +92,58 @@ Combining the order of inheritance with generic functions with multiple argument
* (yet-another-fn (make-instance 'class-a) (make-instance 'class-b))
:FIRST-ARG-SPECIALIZED
-```
+----
-CLOS has to make a choice between the first and the second definition of `yet-another-fn`, but its choice is just a heuristic.
-What if we want the choice to be based on the second argument, instead of the first?
+CLOS has to make a choice between the first and the second definition of
+`yet-another-fn`, but its choice is just a heuristic. What if we want the
+choice to be based on the second argument, instead of the first?
-For that, we use the `:argument-precedence-order` option when declaring a generic function:
+For that, we use the `:argument-precedence-order` option when declaring a
+generic function:
-```lisp
+[source,lisp]
+----
* (defgeneric yet-another-fn (obj1 obj2) (:argument-precedence-order obj2 obj1))
#<STANDARD-GENERIC-FUNCTION YET-ANOTHER-FN (2) {584D9EC9}>
* (yet-another-fn (make-instance 'class-a) (make-instance 'class-b))
:SECOND-ARG-SPECIALIZED
-```
+----
-I liked that the `:argument-precedence-order` option exists.
-We shouldn't have to change the arguments from `(obj1 obj2)` to `(obj2 obj1)` just to make CLOS pick the method that we want.
-We can configure its default behaviour if desired, and keep the order of arguments however it best fits the generic function.
+I liked that the `:argument-precedence-order` option exists. We shouldn't have
+to change the arguments from `(obj1 obj2)` to `(obj2 obj1)` just to make CLOS
+pick the method that we want. We can configure its default behaviour if
+desired, and keep the order of arguments however it best fits the generic
+function.
-## Comparison with Clojure
+== Comparison with Clojure
Clojure has an equivalent, when using `defmulti`.
-Since when declaring a multi-method with `defmulti` we must define the dispatch function, Clojure uses it to pick the method definition.
-Since the dispatch function is required, there is no need for a default behaviour, such as left-to-right.
+Since declaring a multi-method with `defmulti` requires defining the dispatch
+function, Clojure uses it to pick the method definition. As the dispatch
+function is mandatory, there is no need for a default behaviour, such as
+left-to-right.
-## Conclusion
+== Conclusion
-Making the argument precedence order configurable for generic functions but not for class definitions makes a lot of sense.
+Making the argument precedence order configurable for generic functions but not
+for class definitions makes a lot of sense.
-When declaring a class, we can choose the precedence order, and that is about it.
-But when defining a generic function, the order of arguments is more important to the function semantics, and the argument precedence being left-to-right is just the default behaviour.
+When declaring a class, we can choose the precedence order, and that is about
+it. But when defining a generic function, the order of arguments is more
+important to the function semantics, and the argument precedence being
+left-to-right is just the default behaviour.
-One shouldn't change the order of arguments of a generic function for the sake of tailoring it to the CLOS priority ranking algorithm, but doing it for a class definition is just fine.
+One shouldn't change the order of arguments of a generic function for the sake
+of tailoring it to the CLOS priority ranking algorithm, but doing it for a class
+definition is just fine.
TIL.
-## References
+== References
+
+:clos-wiki: https://en.wikipedia.org/wiki/Object-Oriented_Programming_in_Common_Lisp
-1. [Object-Oriented Programming in Common Lisp: A Programmer's Guide to CLOS](https://en.wikipedia.org/wiki/Object-Oriented_Programming_in_Common_Lisp), by Sonja E. Keene
+. {clos-wiki}[Object-Oriented Programming in Common Lisp: A Programmer's Guide
+ to CLOS], by Sonja E. Keene
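For readers more at home outside Lisp, the left-to-right default and the effect of `:argument-precedence-order` can be imitated in JavaScript with a toy multimethod. This is only a sketch: `makeGeneric` and `defmethod` are hypothetical names invented for illustration, not any real library's API.

```javascript
// Toy multimethod: dispatch on the constructor name of each argument,
// breaking ties by a configurable argument precedence order.
// All names here are illustrative, not part of any real library.
function makeGeneric(precedence) {
  const methods = [];
  const generic = (...args) => {
    // A method matches if each specializer is either "*" (unspecialized)
    // or the argument's constructor name.
    const matching = methods.filter(m =>
      m.specializers.every((s, i) =>
        s === "*" || args[i]?.constructor?.name === s));
    // Prefer methods that specialize earlier positions, visiting
    // argument positions in the configured precedence order.
    matching.sort((a, b) => {
      for (const i of precedence) {
        const sa = a.specializers[i] === "*" ? 1 : 0;
        const sb = b.specializers[i] === "*" ? 1 : 0;
        if (sa !== sb) return sa - sb;
      }
      return 0;
    });
    // Assumes at least one method matches.
    return matching[0].fn(...args);
  };
  generic.defmethod = (specializers, fn) =>
    methods.push({ specializers, fn });
  return generic;
}

class A {}
class B {}

// Left-to-right, like the CLOS default:
const leftToRight = makeGeneric([0, 1]);
leftToRight.defmethod(["A", "*"], () => "first-arg-specialized");
leftToRight.defmethod(["*", "B"], () => "second-arg-specialized");

// Right-to-left, like (:argument-precedence-order obj2 obj1):
const rightToLeft = makeGeneric([1, 0]);
rightToLeft.defmethod(["A", "*"], () => "first-arg-specialized");
rightToLeft.defmethod(["*", "B"], () => "second-arg-specialized");

console.log(leftToRight(new A(), new B())); // "first-arg-specialized"
console.log(rightToLeft(new A(), new B())); // "second-arg-specialized"
```

The same two method definitions are registered on both generics; only the precedence list changes which one wins, mirroring the CLOS behaviour shown above.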
diff --git a/src/content/tils/2021/04/24/clojure-autocurry.adoc b/src/content/tils/2021/04/24/clojure-autocurry.adoc
index c1e277f..d7dd557 100644
--- a/src/content/tils/2021/04/24/clojure-autocurry.adoc
+++ b/src/content/tils/2021/04/24/clojure-autocurry.adoc
@@ -1,22 +1,13 @@
----
+= Clojure auto curry
-title: Clojure auto curry
+:defcurry-orig: https://lorettahe.github.io/clojure/2016/09/22/clojure-auto-curry
-date: 2021-04-24 1
+Here's a simple macro defined by {defcurry-orig}[Loretta He] to create Clojure
+functions that are curried on all arguments, relying on Clojure's multi-arity
+support:
-updated_at: 2021-04-27
-
-layout: post
-
-lang: en
-
-ref: clojure-auto-curry
-
----
-
-Here's a simple macro defined by [Loretta He][lorettahe] to create Clojure functions that are curried on all arguments, relying on Clojure's multi-arity support:
-
-```clojure
+[source,clojure]
+----
(defmacro defcurry
[name args & body]
(let [partials (map (fn [n]
@@ -25,11 +16,12 @@ Here's a simple macro defined by [Loretta He][lorettahe] to create Clojure funct
`(defn ~name
(~args ~@body)
~@partials)))
-```
+----
A naive `add` definition, alongside its usage and macroexpansion:
-```clojure
+[source,clojure]
+----
user=> (defcurry add
[a b c d e]
       (+ a b c d e))
@@ -64,22 +56,24 @@ user=> (pprint
([a b c] (clojure.core/partial add a b c))
([a b c d] (clojure.core/partial add a b c d))))
nil
-```
+----
-This simplistic `defcurry` definition doesn't support optional parameters, multi-arity, `&` rest arguments, docstrings, etc., but it could certainly evolve to do so.
+This simplistic `defcurry` definition doesn't support optional parameters,
+multi-arity, `&` rest arguments, docstrings, etc., but it could certainly evolve
+to do so.
-I like how `defcurry` is so short, and abdicates the responsability of doing the multi-arity logic to Clojure's built-in multi-arity support.
-Simple and elegant.
+I like how `defcurry` is so short, and delegates the responsibility for the
+multi-arity logic to Clojure's built-in multi-arity support. Simple and
+elegant.
Same Clojure as before, now with auto-currying via macros.
-[lorettahe]: http://lorettahe.github.io/clojure/2016/09/22/clojure-auto-curry
-
-## Comparison with Common Lisp
+== Comparison with Common Lisp
My attempt at writing an equivalent for Common Lisp gives me:
-```lisp
+[source,lisp]
+----
(defun partial (fn &rest args)
(lambda (&rest args2)
(apply fn (append args args2))))
@@ -96,15 +90,17 @@ My attempt at writing an equivalent for Common Lisp gives me:
(let ((func (lambda ,args ,@body)))
(curry-n (- ,(length args) (length rest))
(apply #'partial func rest)))))
-```
+----
-Without built-in multi-arity support, we have to do more work, like tracking the number of arguments consumed so far.
-We also have to write `#'partial` ourselves.
-That is, without dependending on any library, sticking to ANSI Common Lisp.
+Without built-in multi-arity support, we have to do more work, like tracking the
+number of arguments consumed so far. We also have to write `#'partial`
+ourselves. That is, without depending on any library, sticking to ANSI
+Common Lisp.
The usage is pretty similar:
-```lisp
+[source,lisp]
+----
* (defcurry add (a b c d e)
(+ a b c d e))
ADD
@@ -128,8 +124,10 @@ ADD
(LET ((FUNC (LAMBDA (A B C D E) (+ A B C D E))))
(CURRY-N (- 5 (LENGTH REST)) (APPLY #'PARTIAL FUNC REST))))
T
-```
+----
-This also require `funcall`s, since we return a `lambda` that doesn't live in the function namespace.
+This also requires `funcall`s, since we return a `lambda` that doesn't live in
+the function namespace.
-Like the Clojure one, it doesn't support optional parameters, `&rest` rest arguments, docstrings, etc., but it also could evolve to do so.
+Like the Clojure one, it doesn't support optional parameters, `&rest` rest
+arguments, docstrings, etc., but it also could evolve to do so.
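For comparison outside the Lisp family, the same idea can be sketched in JavaScript, where `Function.prototype.length` plays the role of the explicit arity count in the Common Lisp version. `curry` here is a hypothetical helper written for illustration, not a library function:

```javascript
// A fixed-arity auto-curry: calling with fewer arguments than the
// function's declared arity returns a partial application; a full
// argument list runs the body.
const curry = (fn) => {
  const curried = (...args) =>
    args.length >= fn.length
      ? fn(...args)
      : (...more) => curried(...args, ...more);
  return curried;
};

const add = curry((a, b, c, d, e) => a + b + c + d + e);

console.log(add(1)(2)(3)(4)(5)); // 15
console.log(add(1, 2)(3, 4, 5)); // 15
console.log(add(1, 2, 3, 4, 5)); // 15
```

Like the Clojure and Common Lisp versions, this sketch ignores rest arguments and optional parameters (`fn.length` doesn't count them).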
diff --git a/src/content/tils/2021/04/24/scm-nif.adoc b/src/content/tils/2021/04/24/scm-nif.adoc
index f53451b..f70e533 100644
--- a/src/content/tils/2021/04/24/scm-nif.adoc
+++ b/src/content/tils/2021/04/24/scm-nif.adoc
@@ -1,35 +1,28 @@
----
+= Three-way conditional for number signs on Lisp
-title: Three-way conditional for number signs on Lisp
+:on-lisp: https://www.paulgraham.com/onlisptext.html
+:sicp: https://mitpress.mit.edu/sites/default/files/sicp/index.html
-date: 2021-04-24 3
+A useful macro from Paul Graham's {on-lisp}[On Lisp] book:
-updated_at: 2021-08-14
-
-layout: post
-
-lang: en
-
-ref: three-way-conditional-for-number-signs-on-lisp
-
----
-
-A useful macro from Paul Graham's [On Lisp][on-lisp] book:
-
-```lisp
+[source,lisp]
+----
(defmacro nif (expr pos zero neg)
(let ((g (gensym)))
`(let ((,g ,expr))
(cond ((plusp ,g) ,pos)
((zerop ,g) ,zero)
(t ,neg)))))
-```
+----
-After I looked at this macro, I started seeing opportunities to using it in many places, and yet I didn't see anyone else using it.
+After I looked at this macro, I started seeing opportunities to use it in many
+places, and yet I didn't see anyone else using it.
-The latest example I can think of is section 1.3.3 of [Structure and Interpretation of Computer Programs][sicp], which I was reading recently:
+The latest example I can think of is section 1.3.3 of {sicp}[Structure and
+Interpretation of Computer Programs], which I was reading recently:
-```scheme
+[source,scheme]
+----
(define (search f neg-point pos-point)
(let ((midpoint (average neg-point pos-point)))
    (if (close-enough? neg-point pos-point)
@@ -40,11 +33,14 @@ The latest example I can think of is section 1.3.3 of [Structure and Interpretat
((negative? test-value)
(search f midpoint pos-point))
(else midpoint))))))
-```
+----
-Not that the book should introduce such macro this early, but I couldn't avoid feeling bothered by not using the `nif` macro, which could even remove the need for the intermediate `test-value` variable:
+Not that the book should introduce such a macro this early, but I couldn't help
+feeling bothered by not using the `nif` macro, which could even remove the need
+for the intermediate `test-value` variable:
-```scheme
+[source,scheme]
+----
(define (search f neg-point pos-point)
(let ((midpoint (average neg-point pos-point)))
    (if (close-enough? neg-point pos-point)
@@ -53,11 +49,10 @@ Not that the book should introduce such macro this early, but I couldn't avoid f
(search f neg-point midpoint)
          midpoint
(search f midpoint pos-point)))))
-```
-
-It also avoids `cond`'s extra clunky parentheses for grouping, which is unnecessary but built-in.
+----
-As a macro, I personally feel it tilts the balance towards expressivenes despite its extra cognitive load toll.
+It also avoids `cond`'s extra clunky parentheses for grouping, which are
+unnecessary but built-in.
-[on-lisp]: http://www.paulgraham.com/onlisptext.html
-[sicp]: https://mitpress.mit.edu/sites/default/files/sicp/index.html
+As a macro, I personally feel it tilts the balance towards expressivenes despite
+its extra cognitive load toll.
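Without macros there is no way to delay evaluation of the branches syntactically, but the spirit of `nif` can still be sketched in JavaScript by passing the branches as thunks so that only the chosen one runs. This is an illustrative sketch, not a proposal for how the macro "should" look:

```javascript
// Three-way numeric conditional in the spirit of nif: evaluate expr
// once, then pick the positive, zero, or negative branch. Branches
// are thunks so only the selected one is evaluated.
const nif = (expr, pos, zero, neg) =>
  expr > 0 ? pos() : expr === 0 ? zero() : neg();

console.log(nif(0.5, () => "+", () => "0", () => "-")); // "+"
console.log(nif(0,   () => "+", () => "0", () => "-")); // "0"
console.log(nif(-4,  () => "+", () => "0", () => "-")); // "-"
```

The thunk wrappers are exactly the syntactic noise that the Lisp macro version makes disappear.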
diff --git a/src/content/tils/2021/07/23/git-tls-gpg.adoc b/src/content/tils/2021/07/23/git-tls-gpg.adoc
index fd42c1c..8fe86c6 100644
--- a/src/content/tils/2021/07/23/git-tls-gpg.adoc
+++ b/src/content/tils/2021/07/23/git-tls-gpg.adoc
@@ -1,27 +1,21 @@
----
+= GPG verification of Git repositories without TLS
-title: GPG verification of Git repositories without TLS
-
-date: 2021-07-23
-
-layout: post
-
-lang: en
-
-ref: gpg-verification-of-git-repositories-without-tls
-
----
+:empty:
+:git-protocol: https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protocols#_the_git_protocol
+:remembering: https://euandreh.xyz/remembering/
For online Git repositories that use the {git-protocol}[Git Protocol] for
serving code, you can use GPG to handle authentication, if you have the
committer's public key.
Here's how I'd verify that I've cloned an authentic version of
-[remembering][remembering][^not-available]:
+{remembering}[remembering]footnote:not-available[
+ Funnily enough, not available anymore via the Git Protocol, now only with
+ HTTPS.
+]:
-[^not-available]: Funnily enough, not available anymore via the Git Protocol, now only with HTTPS.
-
-```shell
+[source,shell]
+----
$ wget -qO- https://euandre.org/public.asc | gpg --import -
gpg: clef 81F90EC3CD356060 : « EuAndreh <eu@euandre.org> » n'est pas modifiée
gpg: Quantité totale traitée : 1
@@ -32,7 +26,7 @@ $ git verify-commit HEAD
gpg: Signature faite le dim. 27 juin 2021 16:50:21 -03
gpg: avec la clef RSA 5BDAE9B8B2F6C6BCBB0D6CE581F90EC3CD356060
gpg: Bonne signature de « EuAndreh <eu@euandre.org> » [ultime]
-```
+----
On the first line we import the public key (funnily enough, available via
HTTPS), and after cloning the code via the insecure `git://` protocol, we use
@@ -40,17 +34,12 @@ HTTPS), and after cloning the code via the insecure `git://` protocol, we use
The verification is successful, and we can see that the public key from the
signature matches the fingerprint of the imported one. However
-`git verify-commit` doesn't have an option to check which public key you want
-to verify the commit against. Which means that if a MITM attack happens, the
-attacker could very easily serve a malicious repository with signed commits,
-and you'd have to verify the public key by yourself. That would need to happen
-for subsequent fetches, too.
+`git verify-commit` doesn't have an option to check which public key you want to
+verify the commit against. Which means that if a MITM attack happens, the
+attacker could very easily serve a malicious repository with signed commits, and
+you'd have to verify the public key by yourself. That would need to happen for
+subsequent fetches, too.
Even though this is possible, it is not very convenient, and certainly very
brittle. Despite the fact that the Git Protocol is much faster, it being harder
to make secure is a big downside.
-
-
-
-[Git Protocol]: https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protocols#_the_git_protocol
-[remembering]: https://euandreh.xyz/remembering/
diff --git a/src/content/tils/2021/08/11/js-bigint-reviver.adoc b/src/content/tils/2021/08/11/js-bigint-reviver.adoc
index d71174d..657248a 100644
--- a/src/content/tils/2021/08/11/js-bigint-reviver.adoc
+++ b/src/content/tils/2021/08/11/js-bigint-reviver.adoc
@@ -1,39 +1,26 @@
----
+= Encoding and decoding JavaScript BigInt values with reviver
-title: Encoding and decoding JavaScript BigInt values with reviver
+:reviver-fn: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#using_the_reviver_parameter
+:bigint: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt
+:json-rfc: https://datatracker.ietf.org/doc/html/rfc8259
-date: 2021-08-11
-
-updated_at: 2021-08-13
-
-layout: post
-
-lang: en
-
-ref: encoding-and-decoding-javascript-bigint-values-with-reviver
-
----
-
-`JSON.parse()` accepts a second parameter: a [`reviver()` function][reviver].
+`JSON.parse()` accepts a second parameter: a {reviver-fn}[`reviver()` function].
It is a function that can be used to transform the `JSON` values as they're
being parsed.
-[reviver]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#using_the_reviver_parameter
-
-As it turns out, when combined with JavaScript's [`BigInt`] type, you can parse
-and encode JavaScript `BigInt` numbers via JSON:
+As it turns out, when combined with JavaScript's {bigint}[`BigInt`] type, you
+can parse and encode JavaScript `BigInt` numbers via JSON:
-[`BigInt`]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt
-
-```javascript
+[source,javascript]
+----
const bigIntReviver = (_, value) =>
- typeof value === "string" && value.match(/^-?[0-9]+n$/)
- ? BigInt(value.slice(0, value.length - 1))
- : value;
-```
+ typeof value === "string" && value.match(/^-?[0-9]+n$/)
+ ? BigInt(value.slice(0, value.length - 1))
+ : value;
+----
-I chose to interpret strings that contains only numbers and an ending `n` suffix
-as `BigInt` values, similar to how JavaScript interprets `123` (a number)
+I chose to interpret strings that contain only numbers and an ending `n`
+suffix as `BigInt` values, similar to how JavaScript interprets `123` (a number)
differently from `123n` (a `bigint`).
We do those checks before constructing the `BigInt` to avoid throwing needless
@@ -42,25 +29,27 @@ become a bottleneck when parsing large JSON values.
In order to do the full roundtrip, we now only need the `toJSON()` counterpart:
-```javascript
+[source,javascript]
+----
BigInt.prototype.toJSON = function() {
- return this.toString() + "n";
+ return this.toString() + "n";
};
-```
+----
With both `bigIntReviver` and `toJSON` defined, we can now successfully parse
and encode JavaScript objects with `BigInt` values transparently:
-```javascript
+[source,javascript]
+----
const s = `[
- null,
- true,
- false,
- -1,
- 3.14,
- "a string",
- { "a-number": "-123" },
- { "a-bigint": "-123n" }
+ null,
+ true,
+ false,
+ -1,
+ 3.14,
+ "a string",
+ { "a-number": "-123" },
+ { "a-bigint": "-123n" }
]`;
const parsed = JSON.parse(s, bigIntReviver);
@@ -71,11 +60,11 @@ console.log(s2);
console.log(typeof parsed[6]["a-number"])
console.log(typeof parsed[7]["a-bigint"])
-```
+----
The output of the above is:
-```
+....
[
null,
true,
@@ -89,12 +78,10 @@ The output of the above is:
[null,true,false,-1,3.14,"a string",{"a-number":"-123"},{"a-bigint":"-123n"}]
string
bigint
-```
+....
If you're on a web browser, you can probably try copying and pasting the above
code on the console right now, as is.
-Even though [`JSON`] doesn't include `BigInt` number, encoding and decoding them
-as strings is quite trivial on JavaScript.
-
-[`JSON`]: https://datatracker.ietf.org/doc/html/rfc8259
+Even though {json-rfc}[`JSON`] doesn't include `BigInt` numbers, encoding and
+decoding them as strings is quite trivial in JavaScript.
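As a quick self-contained check of the roundtrip described in the post, the reviver and the `toJSON` counterpart can be exercised together (same definitions as above; the sample values are arbitrary):

```javascript
// Reviver: strings of digits with a trailing "n" become BigInt values.
const bigIntReviver = (_, value) =>
  typeof value === "string" && value.match(/^-?[0-9]+n$/)
    ? BigInt(value.slice(0, value.length - 1))
    : value;

// Encoder counterpart: BigInt values serialize as "<digits>n" strings.
BigInt.prototype.toJSON = function() {
  return this.toString() + "n";
};

const original = { big: 123456789012345678901234567890n, small: 42 };
const roundtripped = JSON.parse(JSON.stringify(original), bigIntReviver);

console.log(roundtripped.big === original.big); // true
console.log(typeof roundtripped.big);           // "bigint"
console.log(typeof roundtripped.small);         // "number"
```

Regular numbers pass through untouched; only the `"…n"`-suffixed strings are promoted to `bigint`.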