author     EuAndreh <eu@euandre.org>  2025-03-31 21:51:40 -0300
committer  EuAndreh <eu@euandre.org>  2025-03-31 21:51:40 -0300
commit     570ec471d1605318aeefb030cd78682ae442235b (patch)
tree       51e17eabe37c6689f8799b55e6875c3480329a2c
parent     Makefile, mkdeps.sh: Derive index.html and feed.xml from more static "sortdat... (diff)
src/content/: Update all files left to asciidoc
-rw-r--r--  src/content/blog/2020/08/31/database-i-with-i-had.adoc      4
-rw-r--r--  src/content/blog/2020/10/19/feature-flags.adoc              315
-rw-r--r--  src/content/blog/2020/10/20/wrong-interviewing.adoc         302
-rw-r--r--  src/content/blog/2020/11/07/diy-bugs.adoc                   122
-rw-r--r--  src/content/blog/2020/11/08/paradigm-shift-review.adoc      163
-rw-r--r--  src/content/blog/2020/11/12/database-parsers-trees.adoc     227
-rw-r--r--  src/content/blog/2020/11/14/local-first-review.adoc         218
-rw-r--r--  src/content/blog/2021/01/26/remembering-ann.adoc            185
-rw-r--r--  src/content/blog/2021/02/17/fallible.adoc                   216
-rw-r--r--  src/content/blog/2021/04/29/relational-review.adoc          126
-rw-r--r--  src/content/pastebins/2016/04/05/rpn.adoc                   19
-rw-r--r--  src/content/pastebins/2018/07/11/nix-pinning.adoc           21
-rw-r--r--  src/content/pastebins/2018/07/13/guix-nixos-systemd.adoc    21
-rw-r--r--  src/content/pastebins/2018/07/13/guixbuilder-nixos.adoc     21
-rw-r--r--  src/content/pastebins/2018/07/13/guixbuilder.adoc           21
-rw-r--r--  src/content/pastebins/2018/07/13/nix-strpad.adoc            21
-rw-r--r--  src/content/pastebins/2018/07/25/nix-exps.adoc              21
-rw-r--r--  src/content/pastebins/2018/07/25/nix-showdrv.adoc           21
-rw-r--r--  src/content/pastebins/2019/06/08/inconsistent-hash.adoc     55
-rw-r--r--  src/content/pastebins/2019/12/29/raku-tuple-type.adoc       24
-rw-r--r--  src/content/pastebins/2020/01/04/guix-import-failure.adoc   21
-rw-r--r--  src/content/pastebins/2020/02/14/guix-shebang.adoc          21
-rw-r--r--  src/content/pastebins/2020/11/27/guix-build-local.adoc      38
-rw-r--r--  src/content/pastebins/2020/12/15/guix-pack-fail.adoc        33
-rw-r--r--  src/content/pastebins/2021/04/03/naive-slugify-js.adoc      21
-rw-r--r--  src/content/pastebins/2021/06/08/reading-session-pt1.adoc   19
-rw-r--r--  src/content/pastebins/2021/06/22/curl-wget.adoc             24
-rw-r--r--  src/content/pastebins/2021/08/11/h1-spacing.adoc            62
-rw-r--r--  src/content/pastebins/2021/09/02/sicp-3-19.adoc             24
-rw-r--r--  src/content/pastebins/2021/09/03/sicp-persistent-queue.adoc 37
-rw-r--r--  src/content/pastebins/2022/07/14/git-cleanup.adoc           19
-rw-r--r--  src/content/pastebins/2023/07/22/funcallable-amop.adoc      20
-rw-r--r--  src/content/podcasts/2020/12/19/test-entry.adoc             10
-rw-r--r--  src/content/screencasts/2021/02/07/autoqemu.adoc            51
-rw-r--r--  src/content/tils/2020/08/12/filename-timestamp.adoc         38
-rw-r--r--  src/content/tils/2020/08/13/code-jekyll.adoc                102
-rw-r--r--  src/content/tils/2020/08/14/browse-git.adoc                 67
-rw-r--r--  src/content/tils/2020/08/16/git-search.adoc                 51
-rw-r--r--  src/content/tils/2020/08/28/grep-online.adoc                83
-rw-r--r--  src/content/tils/2020/09/04/email-cli-fun-profit.adoc       51
-rw-r--r--  src/content/tils/2020/09/05/oldschool-pr.adoc               77
-rw-r--r--  src/content/tils/2020/10/11/search-git-history.adoc         47
-rw-r--r--  src/content/tils/2020/11/08/find-broken-symlink.adoc        36
-rw-r--r--  src/content/tils/2020/11/12/diy-nix-bash-ci.adoc            45
-rw-r--r--  src/content/tils/2020/11/12/git-bisect-automation.adoc      23
-rw-r--r--  src/content/tils/2020/11/12/useful-bashvars.adoc            56
-rw-r--r--  src/content/tils/2020/11/14/gpodder-media.adoc              30
-rw-r--r--  src/content/tils/2020/11/30/git-notes-ci.adoc               63
-rw-r--r--  src/content/tils/2020/12/15/shellcheck-repo.adoc            90
-rw-r--r--  src/content/tils/2020/12/29/svg.adoc                        50
-rw-r--r--  src/content/tils/2021/01/12/curl-awk-emails.adoc            100
-rw-r--r--  src/content/tils/2021/01/17/posix-shebang.adoc              63
-rw-r--r--  src/content/tils/2021/04/24/cl-generic-precedence.adoc      98
-rw-r--r--  src/content/tils/2021/04/24/clojure-autocurry.adoc          66
-rw-r--r--  src/content/tils/2021/04/24/scm-nif.adoc                    53
-rw-r--r--  src/content/tils/2021/07/23/git-tls-gpg.adoc                43
-rw-r--r--  src/content/tils/2021/08/11/js-bigint-reviver.adoc          81
57 files changed, 1780 insertions, 2156 deletions
diff --git a/src/content/blog/2020/08/31/database-i-with-i-had.adoc b/src/content/blog/2020/08/31/database-i-with-i-had.adoc
index fdcb56c..7533c8a 100644
--- a/src/content/blog/2020/08/31/database-i-with-i-had.adoc
+++ b/src/content/blog/2020/08/31/database-i-with-i-had.adoc
@@ -62,8 +62,8 @@ the source of truth, and allow the application to work on top of it.
{sqlite}[*SQLite*] is a great example of that: it is a very powerful relational
database that runs {sqlite-whentouse}[almost anywhere]. What I miss from it
that SQLite doesn't provide is the ability to run it on the browser: even though
-you could compile it to WebAssembly, [line-through]*it assumes a POSIX
-filesystem that would have to be emulated*[multiblock footnote omitted].
+you could compile it to WebAssembly, [line-through]#it assumes a POSIX
+filesystem that would have to be emulated#[multiblock footnote omitted FIXME].
{pouchdb}[*PouchDB*] is another great example: it's a full reimplementation of
{couchdb}[CouchDB] that targets JavaScript environments, mainly the browser and
diff --git a/src/content/blog/2020/10/19/feature-flags.adoc b/src/content/blog/2020/10/19/feature-flags.adoc
index c62c2d1..c9adc8a 100644
--- a/src/content/blog/2020/10/19/feature-flags.adoc
+++ b/src/content/blog/2020/10/19/feature-flags.adoc
@@ -1,305 +1,304 @@
----
-title: "Feature flags: differences between backend, frontend and mobile"
-date: 2020-10-19
-updated_at: 2020-11-03
-layout: post
-lang: en
-ref: feature-flags-differences-between-backend-frontend-and-mobile
-eu_categories: presentation
----
-
-*This article is derived from a [presentation][presentation] on the same
-subject.*
-
-When discussing about feature flags, I find that their
-costs and benefits are often well exposed and addressed. Online articles like
-"[Feature Toggle (aka Feature Flags)][feature-flags-article]" do a great job of
+= Feature flags: differences between backend, frontend and mobile
+
+:empty:
+:slides: link:../../../../slides/2020/10/19/feature-flags.html
+:fowler-article: https://martinfowler.com/articles/feature-toggles.html
+
+_This article is derived from a {slides}[presentation] on the same subject._
+
+When discussing feature flags, I find that their costs and benefits are often
+well exposed and addressed. Online articles like
+"{fowler-article}[Feature Toggle (aka Feature Flags)]" do a great job of
explaining them in detail, giving great general guidance of how to apply
techniques to adopt it.
However the weight of those costs and benefits apply differently on backend,
-frontend or mobile, and those differences aren't covered. In fact, many of them
+frontend or mobile, and those differences aren't covered. In fact, many of them
stop making sense, or the decision of adopting a feature flag or not may change
depending on the environment.
In this article I try to make the distinction between environments and how
- feature flags apply to them, with some final best practices I've acquired when
- using them in production.
+feature flags apply to them, with some final best practices I've acquired when
+using them in production.
-[presentation]: {% link _slides/2020-10-19-rollout-feature-flag-experiment-operational-toggle.slides %}
-[feature-flags-article]: https://martinfowler.com/articles/feature-toggles.html
+== Why feature flags
-## Why feature flags
+:atlassian-cicd: https://www.atlassian.com/continuous-delivery/principles/continuous-integration-vs-delivery-vs-deployment
Feature flags in general tend to be cited on the context of
-[continuous deployment][cd]:
+{atlassian-cicd}[continuous deployment]:
-> A: With continuous deployment, you deploy to production automatically
+____
+A: With continuous deployment, you deploy to production automatically
-> B: But how do I handle deployment failures, partial features, *etc.*?
+B: But how do I handle deployment failures, partial features, _etc._?
-> A: With techniques like canary, monitoring and alarms, feature flags, *etc.*
+A: With techniques like canary, monitoring and alarms, feature flags, _etc._
+____
-Though adopting continuous deployment doesn't force you to use feature
-flags, it creates a demand for it. The inverse is also true: using feature flags
-on the code points you more obviously to continuous deployment. Take the
-following code sample for example, that we will reference later on the article:
+Though adopting continuous deployment doesn't force you to use feature flags, it
+creates a demand for them. The inverse is also true: using feature flags in the
+code points you more obviously to continuous deployment. Take the following
+code sample, which we will reference later in the article:
-```javascript
+[source,javascript]
+----
function processTransaction() {
- validate();
- persist();
- // TODO: add call to notifyListeners()
+ validate();
+ persist();
+ // TODO: add call to notifyListeners()
}
-```
+----
While being developed, being tested for suitability or something similar,
-`notifyListeners()` may not be included in the code at once. So instead of
+`notifyListeners()` may not be included in the code at once. So instead of
keeping it on a separate, long-lived branch, a feature flag can decide when the
new, partially implemented function will be called:
-```javascript
+[source,javascript]
+----
function processTransaction() {
- validate();
- persist();
- if (featureIsEnabled("activate-notify-listeners")) {
- notifyListeners();
- }
+ validate();
+ persist();
+ if (featureIsEnabled("activate-notify-listeners")) {
+ notifyListeners();
+ }
}
-```
+----
This allows your code to include `notifyListeners()`, and decide when to call it
-at runtime. For the price of extra things around the code, you get more
+at runtime. For the price of extra things around the code, you get more
dynamicity.
So the fundamental question to ask yourself when considering adding a feature
flag should be:
-> Am I willing to pay with code complexity to get dynamicity?
+____
+Am I willing to pay with code complexity to get dynamicity?
+____
-It is true that you can make the management of feature flags as
-straightforward as possible, but having no feature flags is simpler than having
-any. What you get in return is the ability to parameterize the behaviour of the
-application at runtime, without doing any code changes.
+It is true that you can make the management of feature flags as straightforward
+as possible, but having no feature flags is simpler than having any. What you
+get in return is the ability to parameterize the behaviour of the application at
+runtime, without doing any code changes.
Sometimes this added complexity may tilt the balance towards not using a feature
flag, and sometimes the flexibility of changing behaviour at runtime is
-absolutely worth the added complexity. This can vary a lot by code base, feature, but
-fundamentally by environment: its much cheaper to deploy a new version of a
-service than to release a new version of an app.
+absolutely worth the added complexity. This can vary a lot by code base,
+feature, but fundamentally by environment: its much cheaper to deploy a new
+version of a service than to release a new version of an app.
So the question of which environment is being targeted is key when reasoning
about costs and benefits of feature flags.
-[cd]: https://www.atlassian.com/continuous-delivery/principles/continuous-integration-vs-delivery-vs-deployment
+== Control over the environment
-## Control over the environment
+:fdroid: https://f-droid.org/
+:bad-apple: https://www.paulgraham.com/apple.html
The key differentiator that makes the trade-offs apply differently is how much
control you have over the environment.
-When running a **backend** service, you usually are paying for the servers
-themselves, and can tweak them as you wish. This means you have full control do
-to code changes as you wish. Not only that, you decide when to do it, and for
+When running a *backend* service, you usually are paying for the servers
+themselves, and can tweak them as you wish. This means you have full control to
+do code changes as you wish. Not only that, you decide when to do it, and for
how long the transition will last.
-On the **frontend** you have less control: even though you can choose to make a
-new version available any time you wish, you can't force[^force] clients to
-immediately switch to the new version. That means that a) clients could skip
-upgrades at any time and b) you always have to keep backward and forward
-compatibility in mind.
+On the *frontend* you have less control: even though you can choose to make a
+new version available any time you wish, you can't
+force{empty}footnote:force[
+ Technically you could force a reload with JavaScript using
+ `window.location.reload()`, but that not only is invasive and impolite, but
+ also gives you the illusion that you have control over the client when you
+ actually don't: clients with disabled JavaScript would be immune to such
+ tactics.
+] clients to immediately switch to the new version. That means that a) clients
+could skip upgrades at any time and b) you always have to keep backward and
+forward compatibility in mind.
Even though I'm mentioning frontend directly, it applies to other environments
with similar characteristics: desktop applications, command-line programs,
-*etc*.
+_etc_.
-On **mobile** you have even less control: app stores need to allow your app to
-be updated, which could bite you when least desired. Theoretically you could
-make you APK available on third party stores like [F-Droid][f-droid], or even
-make the APK itself available for direct download, which would give you the same
+On *mobile* you have even less control: app stores need to allow your app to be
+updated, which could bite you when least desired. Theoretically you could make
+your APK available on third party stores like {fdroid}[F-Droid], or even make the
+APK itself available for direct download, which would give you the same
characteristics of a frontend application, but that happens less often.
-On iOS you can't even do that. You have to get Apple's blessing on every single
-update. Even though we already know that is a [bad idea][apple] for over a
-decade now, there isn't a way around it. This is where you have the least
+On iOS you can't even do that. You have to get Apple's blessing on every single
+update. Even though we already know that is a {bad-apple}[bad idea] for over a
+decade now, there isn't a way around it. This is where you have the least
control.
In practice, the amount of control you have will change how much you value
-dynamicity: the less control you have, the more valuable it is. In other words,
+dynamicity: the less control you have, the more valuable it is. In other words,
having a dynamic flag on the backend may or may not be worth it since you could
always update the code immediately after, but on iOS it is basically always
worth it.
-[f-droid]: https://f-droid.org/
-[^force]: Technically you could force a reload with JavaScript using
- `window.location.reload()`, but that not only is invasive and impolite, but
- also gives you the illusion that you have control over the client when you
- actually don't: clients with disabled JavaScript would be immune to such
- tactics.
+== Rollout
-[apple]: http://www.paulgraham.com/apple.html
+:kubernetes-deployment: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment
+:play-store-rollout: https://support.google.com/googleplay/android-developer/answer/6346149?hl=en
+:app-store-rollout: https://help.apple.com/app-store-connect/#/dev3d65fcee1
-## Rollout
-
-A rollout is used to *roll out* a new version of software.
+A rollout is used to _roll out_ a new version of software.
They are usually short-lived, being relevant as long as the new code is being
-deployed. The most common rule is percentages.
+deployed. The most common rule is percentages.
-On the **backend**, it is common to find it on the deployment infrastructure
-itself, like canary servers, blue/green deployments,
-[a kubernetes deployment rollout][k8s], *etc*. You could do those manually, by
-having a dynamic control on the code itself, but rollbacks are cheap enough that
-people usually do a normal deployment and just give some extra attention to the
-metrics dashboard.
+On the *backend*, it is common to find it on the deployment infrastructure
+itself, like canary servers, blue/green deployments, {kubernetes-deployment}[a
+kubernetes deployment rollout], _etc_. You could do those manually, by having a
+dynamic control on the code itself, but rollbacks are cheap enough that people
+usually do a normal deployment and just give some extra attention to the metrics
+dashboard.
Any time you see a blue/green deployment, there is a rollout happening: most
likely a load balancer is starting to direct traffic to the new server, until
-reaching 100% of the traffic. Effectively, that is a rollout.
+reaching 100% of the traffic. Effectively, that is a rollout.
-On the **frontend**, you can selectively pick which user's will be able to
-download the new version of a page. You could use geographical region, IP,
+On the *frontend*, you can selectively pick which users will be able to
+download the new version of a page. You could use geographical region, IP,
cookie or something similar to make this decision.
-CDN propagation delays and people not refreshing their web
-pages are also rollouts by themselves, since old and new versions of the
-software will coexist.
+CDN propagation delays and people not refreshing their web pages are also
+rollouts by themselves, since old and new versions of the software will coexist.
-On **mobile**, the Play Store allows you to perform
-fine-grained [staged rollouts][staged-rollouts], and the App Store allows you to
-perform limited [phased releases][phased-releases].
+On *mobile*, the Play Store allows you to perform fine-grained
+{play-store-rollout}[staged rollouts], and the App Store allows you to perform
+limited {app-store-rollout}[phased releases].
Both for Android and iOS, the user plays the role of making the download.
In summary: since you control the servers on the backend, you can do rollouts at
-will, and those are often found automated away in base infrastructure. On the
+will, and those are often found automated away in base infrastructure. On the
frontend and on mobile, there are ways to make new versions available, but users
may not download them immediately, and many different versions of the software
end up coexisting.
-[k8s]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment
-[staged-rollouts]: https://support.google.com/googleplay/android-developer/answer/6346149?hl=en
-[phased-releases]: https://help.apple.com/app-store-connect/#/dev3d65fcee1
-
-## Feature flag
+== Feature flag
-A feature flag is a *flag* that tells the application on runtime to turn on or
-off a given *feature*. That means that the actual production code will have more
-than one possible code paths to go through, and that a new version of a feature
-coexists with the old version. The feature flag tells which part of the code to
-go through.
+A feature flag is a _flag_ that tells the application on runtime to turn on or
+off a given _feature_. That means that the actual production code will have
+more than one possible code paths to go through, and that a new version of a
+feature coexists with the old version. The feature flag tells which part of the
+code to go through.
They are usually medium-lived, being relevant as long as the new code is being
-developed. The most common rules are percentages, allow/deny lists, A/B groups
+developed. The most common rules are percentages, allow/deny lists, A/B groups
and client version.
-On the **backend**, those are useful for things that have a long development
-cycle, or that needs to done by steps. Consider loading the feature flag rules
-in memory when the application starts, so that you avoid querying a database
-or an external service for applying a feature flag rule and avoid flakiness on
-the result due to intermittent network failures.
+On the *backend*, those are useful for things that have a long development
+cycle, or that needs to be done in steps. Consider loading the feature flag rules
+in memory when the application starts, so that you avoid querying a database or
+an external service for applying a feature flag rule and avoid flakiness on the
+result due to intermittent network failures.
-Since on the **frontend** you don't control when to update the client software,
+Since on the *frontend* you don't control when to update the client software,
you're left with applying the feature flag rule on the server, and exposing the
-value through an API for maximum dynamicity. This could be in the frontend code
+value through an API for maximum dynamicity. This could be in the frontend code
itself, and fallback to a "just refresh the page"/"just update to the latest
version" strategy for less dynamic scenarios.
-On **mobile** you can't even rely on a "just update to the latest version"
+On *mobile* you can't even rely on a "just update to the latest version"
strategy, since the code for the app could be updated to a new feature and be
-blocked on the store. Those cases aren't recurrent, but you should always assume
-the store will deny updates on critical moments so you don't find yourself with
-no cards to play. That means the only control you actually have is via
-the backend, by parameterizing the runtime of the application using the API. In
-practice, you should always have a feature flag to control any relevant piece of
-code. There is no such thing as "too small code change for a feature flag". What
-you should ask yourself is:
-
-> If the code I'm writing breaks and stays broken for around a month, do I care?
+blocked on the store. Those cases aren't recurrent, but you should always
+assume the store will deny updates on critical moments so you don't find
+yourself with no cards to play. That means the only control you actually have
+is via the backend, by parameterizing the runtime of the application using the
+API. In practice, you should always have a feature flag to control any relevant
+piece of code. There is no such thing as "too small code change for a feature
+flag". What you should ask yourself is:
+
+____
+If the code I'm writing breaks and stays broken for around a month, do I care?
+____
If you're doing an experimental screen, or something that will have a very small
-impact you might answer "no" to the above question. For everything else, the
+impact you might answer "no" to the above question. For everything else, the
answer will be "yes": bug fixes, layout changes, refactoring, new screen,
-filesystem/database changes, *etc*.
+filesystem/database changes, _etc_.
-## Experiment
+== Experiment
An experiment is a feature flag where you care about analytical value of the
-flag, and how it might impact user's behaviour. A feature flag with analytics.
+flag, and how it might impact user's behaviour. A feature flag with analytics.
They are also usually medium-lived, being relevant as long as the new code is
-being developed. The most common rule is A/B test.
+being developed. The most common rule is A/B test.
-On the **backend**, an experiment rely on an analytical environment that will
-pick the A/B test groups and distributions, which means those can't be held in
-memory easily. That also means that you'll need a fallback value in case
-fetching the group for a given customer fails.
+On the *backend*, an experiment relies on an analytical environment that will pick
+the A/B test groups and distributions, which means those can't be held in memory
+easily. That also means that you'll need a fallback value in case fetching the
+group for a given customer fails.
-On the **frontend** and on **mobile** they are no different from feature flags.
+On the *frontend* and on *mobile* they are no different from feature flags.
-## Operational toggle
+== Operational toggle
An operational toggle is like a system-level manual circuit breaker, where you
-turn on/off a feature, fail over the load to a different server, *etc*. They are
-useful switches to have during an incident.
+turn on/off a feature, fail over the load to a different server, _etc_. They
+are useful switches to have during an incident.
They are usually long-lived, being relevant as long as the code is in
-production. The most common rule is percentages.
+production. The most common rule is percentages.
They can be feature flags that are promoted to operational toggles on the
-**backend**, or may be purposefully put in place preventively or after a
+*backend*, or may be purposefully put in place preventively or after a
postmortem analysis.
-On the **frontend** and on **mobile** they are similar to feature flags, where
-the "feature" is being turned on and off, and the client interprets this value
-to show if the "feature" is available or unavailable.
+On the *frontend* and on *mobile* they are similar to feature flags, where the
+"feature" is being turned on and off, and the client interprets this value to
+show if the "feature" is available or unavailable.
-## Best practices
+== Best practices
-### Prefer dynamic content
+=== Prefer dynamic content
Even though feature flags give you more dynamicity, they're still somewhat
manual: you have to create one for a specific feature and change it by hand.
If you find yourself manually updating a feature flags every other day, or
-tweaking the percentages frequently, consider making it fully dynamic. Try
+tweaking the percentages frequently, consider making it fully dynamic. Try
using a dataset that is generated automatically, or computing the content on the
fly.
Say you have a configuration screen with a list of options and sub-options, and
-you're trying to find how to better structure this list. Instead of using a
-feature flag for switching between 3 and 5 options, make it fully dynamic. This
+you're trying to find how to better structure this list. Instead of using a
+feature flag for switching between 3 and 5 options, make it fully dynamic. This
way you'll be able to perform other tests that you didn't plan, and get more
flexibility out of it.
-### Use the client version to negotiate feature flags
+=== Use the client version to negotiate feature flags
After effectively finishing a feature, the old code that coexisted with the new
one will be deleted, and all traces of the transition will vanish from the code
-base. However if you just remove the feature flags from the API, all of the old
+base. However if you just remove the feature flags from the API, all of the old
versions of clients that relied on that value to show the new feature will
downgrade to the old feature.
This means that you should avoid deleting client-facing feature flags, and
retire them instead: use the client version to decide when the feature is
stable, and return `true` for every client with a version greater or equal to
-that. This way you can stop thinking about the feature flag, and you don't break
-or downgrade clients that didn't upgrade past the transition.
+that. This way you can stop thinking about the feature flag, and you don't
+break or downgrade clients that didn't upgrade past the transition.
-### Beware of many nested feature flags
+=== Beware of many nested feature flags
Nested flags combine exponentially.
Pick strategic entry points or transitions eligible for feature flags, and
beware of their nesting.
-### Include feature flags in the development workflow
+=== Include feature flags in the development workflow
Add feature flags to the list of things to think about during whiteboarding, and
deleting/retiring a feature flag at the end of the development.
-### Always rely on a feature flag on the app
+=== Always rely on a feature flag on the app
-Again, there is no such thing "too small for a feature flag". Too many feature
-flags is a good problem to have, not the opposite. Automate the process of
+Again, there is no such thing as "too small for a feature flag". Too many feature
+flags is a good problem to have, not the opposite. Automate the process of
creating a feature flag to lower its cost.
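
[Editor's note, outside the patch itself: the "use the client version to negotiate feature flags" practice in the hunk above can be sketched as follows. This is a minimal illustration, not code from the posts; the names `retiredFlags`, `versionAtLeast`, `featureIsEnabled` and the dotted version format are assumptions.]

```javascript
// Illustrative sketch only: `retiredFlags`, `versionAtLeast` and the
// dotted version format are assumptions, not code from the patch.
const retiredFlags = {
  // flag name -> client version at which the feature became stable
  "activate-notify-listeners": "2.5.0",
};

// Compare dotted version strings numerically, component by component.
function versionAtLeast(clientVersion, minVersion) {
  const a = clientVersion.split(".").map(Number);
  const b = minVersion.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    if ((a[i] || 0) !== (b[i] || 0)) return (a[i] || 0) > (b[i] || 0);
  }
  return true; // equal versions satisfy "greater or equal"
}

// A retired flag is never deleted from the API: any client at or past the
// version where the feature became stable simply gets `true`, so older
// clients are not downgraded when the dynamic rule goes away.
function featureIsEnabled(flag, clientVersion) {
  const stableSince = retiredFlags[flag];
  if (stableSince !== undefined) {
    return versionAtLeast(clientVersion, stableSince);
  }
  return false; // unknown flags would fall through to dynamic evaluation
}
```

With this shape, retiring a flag is a one-line data change rather than an API deletion, which matches the point above about not breaking or downgrading clients that never upgraded past the transition.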
diff --git a/src/content/blog/2020/10/20/wrong-interviewing.adoc b/src/content/blog/2020/10/20/wrong-interviewing.adoc
index 9cdfefb..89f93b8 100644
--- a/src/content/blog/2020/10/20/wrong-interviewing.adoc
+++ b/src/content/blog/2020/10/20/wrong-interviewing.adoc
@@ -1,51 +1,49 @@
----
-title: How not to interview engineers
-date: 2020-10-20
-updated_at: 2020-10-24
-layout: post
-lang: en
-ref: how-not-to-interview-engineers
----
-This is a response to Slava's
-"[How to interview engineers][how-to-interview-engineers]" article. I initially
-thought it was a satire, [as have others][poes-law-comment], but he has
-[doubled down on it][slava-on-satire]:
-
-> (...) Some parts are slightly exaggerated for sure, but the essay isn't meant
-> as a joke.
+= How not to interview engineers
+
+:bad-article: https://defmacro.substack.com/p/how-to-interview-engineers
+:satire-comment: https://defmacro.substack.com/p/how-to-interview-engineers/comments#comment-599996
+:double-down: https://twitter.com/spakhm/status/1315754730740617216
+:poes-law: https://en.wikipedia.org/wiki/Poe%27s_law
+:hn-comment-1: https://news.ycombinator.com/item?id=24757511
+
+This is a response to Slava's "{bad-article}[How to interview engineers]"
+article. I initially thought it was a satire, {satire-comment}[as have others],
+but he has {double-down}[doubled down on it]:
+
+____
+(...) Some parts are slightly exaggerated for sure, but the essay isn't meant as
+a joke.
+____
That being true, he completely misses the point on how to improve hiring, and
-proposes a worse alternative on many aspects. It doesn't qualify as provocative,
-it is just wrong.
+proposes a worse alternative on many aspects. It doesn't qualify as
+provocative, it is just wrong.
I was comfortable taking it as a satire, and I would just ignore the whole thing
if it wasn't (except for the technical memo part), but friends of mine
-considered it to be somewhat reasonable. This is a adapted version of parts of
-the discussions we had, risking becoming a gigantic showcase of
-[Poe's law][poes-law-wiki].
+considered it to be somewhat reasonable. This is an adapted version of parts of
+the discussions we had, risking becoming a gigantic showcase of {poes-law}[Poe's
+law].
In this piece, I will argue against his view, and propose an alternative
approach to improve hiring.
It is common to find people saying how broken technical hiring is, as well put
-in words by a phrase on [this comment][hn-satire]:
+in words by a phrase on {hn-comment-1}[this comment]:
-> Everyone loves to read and write about how developer interviewing is flawed,
-> but no one wants to go out on a limb and make suggestions about how to improve
-> it.
+____
+Everyone loves to read and write about how developer interviewing is flawed, but
+no one wants to go out on a limb and make suggestions about how to improve it.
+____
I guess Slava was trying not to fall into this trap, and make a suggestion on how
to improve instead, which all went terribly wrong.
-[how-to-interview-engineers]: https://defmacro.substack.com/p/how-to-interview-engineers
-[poes-law-comment]: https://defmacro.substack.com/p/how-to-interview-engineers/comments#comment-599996
-[slava-on-satire]: https://twitter.com/spakhm/status/1315754730740617216
-[poes-law-wiki]: https://en.wikipedia.org/wiki/Poe%27s_law
-[hn-satire]: https://news.ycombinator.com/item?id=24757511
+== What not to do
-## What not to do
+=== Time candidates
-### Time candidates
+:hammock-driven-talk: https://www.youtube.com/watch?v=f84n5oFoZBc
Timing the candidate shows up on the "talent" and "judgment" sections, and they
are both bad ideas for the same reason: programming is not a performance.
@@ -55,270 +53,280 @@ psychologists.
For a pianist, their state of mind during concerts is crucial: they not only
must be able to deal with stage anxiety, but to become really successful they
-will have to learn how to exploit it. The time window of the concert is what
+will have to learn how to exploit it. The time window of the concert is what
people practice thousands of hours for, and it is what defines one's career,
since how well all the practice went is irrelevant to the nature of the
-profession. Being able to leverage stage anxiety is an actual goal of them.
+profession. Being able to leverage stage anxiety is an actual goal of theirs.
That is also applicable to athletes, where the execution during a competition
makes them sink or swim, regardless of how all the training was.
-The same cannot be said about composers, though. They are more like book
+The same cannot be said about composers, though. They are more like book
writers, where the value is not in a few moments of high adrenaline, but in
-the aggregate over hours, days, weeks, months and years. A composer may have a
+the aggregate over hours, days, weeks, months and years. A composer may have a
deadline to finish a song in five weeks, but it doesn't really matter if it is
done on a single night, every morning between 6 and 9, at the very last week, or
-any other way. No rigid time structure applies, only whatever fits best to the
+any other way. No rigid time structure applies, only whatever fits best to the
composer.
Programming is more like composing than doing a concert, which is another way of
-saying that programming is not a performance. People don't practice algorithms
+saying that programming is not a performance. People don't practice algorithms
for months to keep them at their fingertips, so that finally in a single
afternoon they can sit down and write everything at once in a rigid 4-hour
window, and launch it immediately after.
Instead, software is built iteratively, by making small additions, then
-refactoring the implementation, fixing bugs, writing a lot at once, *etc*.
-all while they get a firmer grasp of the problem, stop to think about it, come
-up with new ideas, *etc*.
+refactoring the implementation, fixing bugs, writing a lot at once, _etc_. all
+while they get a firmer grasp of the problem, stop to think about it, come up
+with new ideas, _etc_.
Some specifically plan for including spaced pauses, and call it
-"[Hammock Driven Development][hammock-driven-development]", which is just
-artist's "creative idleness" for hackers.
+"{hammock-driven-talk}[Hammock Driven Development]", which is just the artist's
+"creative idleness" for hackers.
Unless you're hiring for a live coding group, a competitive programming team, or
a professional live demoer, timing the candidate that way is more harmful than
-useful. This type of timing doesn't find good programmers, it finds performant
+useful. This type of timing doesn't find good programmers, it finds performant
programmers, which isn't the same thing, and you'll end up with people who can
do great work on small problems but who might be unable to deal with big
-problems, and loose those who can very well handle huge problems, slowly. If you
-are lucky you'll get performant people who can also handle big problems on the
-long term, but maybe not.
+problems, and lose those who can very well handle huge problems, slowly. If
+you are lucky you'll get performant people who can also handle big problems on
+the long term, but maybe not.
An incident is the closest programming gets to a "performance", yet it is still
-dramatically different. Surely it is a high stress scenario, but while people
+dramatically different. Surely it is a high stress scenario, but while people
are trying to find a root cause and solve the problem, only the downtime itself
-is visible to the exterior. It is like being part of the support staff backstage
-during a play: even though execution matters, you're still not on the spot.
-During an incident you're doing debugging in anger rather than live coding.
+is visible to the exterior. It is like being part of the support staff
+backstage during a play: even though execution matters, you're still not on the
+spot. During an incident you're doing debugging in anger rather than live
+coding.
-Although giving a candidate the task to write a "technical memo" has
-potential to get a measure of the written communication skills of someone, doing
-so in a hard time window also misses the point for the same reasons.
+Although giving a candidate the task to write a "technical memo" has potential
+to get a measure of the written communication skills of someone, doing so in a
+hard time window also misses the point for the same reasons.
-[hammock-driven-development]: https://www.youtube.com/watch?v=f84n5oFoZBc
+=== Pay attention to typing speed
-### Pay attention to typing speed
+:dijkstra-typing: https://www.cs.utexas.edu/users/EWD/transcriptions/EWD05xx/EWD512.html
+:speech-to-text: https://www.youtube.com/watch?v=Mz3JeYfBTcY
+:j-lang: https://www.jsoftware.com/#/
Typing speed is never the bottleneck of a programmer, no matter how great
they are.
-As [Dijkstra said][dijkstra-typing]:
+As {dijkstra-typing}[Dijkstra said]:
-> But programming, when stripped of all its circumstantial irrelevancies, boils
-> down to no more and no less than very effective thinking so as to avoid
-> unmastered complexity, to very vigorous separation of your many different
-> concerns.
+____
+But programming, when stripped of all its circumstantial irrelevancies, boils
+down to no more and no less than very effective thinking so as to avoid
+unmastered complexity, to very vigorous separation of your many different
+concerns.
+____
In other words, programming is not about typing, it is about thinking.
Otherwise, the way to get those star programmers who can't type fast enough a
-huge productivity boost is to give them a touch typing course. If they are so
+huge productivity boost is to give them a touch typing course. If they are so
productive with typing speed being a limitation, imagine what they could
accomplish if they had razor-sharp touch typing skills?
Also, why stop there? A good touch typist can do 90 WPM (words per minute), and
a great one can do 120 WPM, but with a stenography keyboard they get to 200
-WPM+. That is double the productivity! Why not try
-[speech-to-text][perl-out-loud]? Make them all use [J][j-lang] so they all need
-to type less! How come nobody thought of that?
+WPM+. That is double the productivity! Why not try
+{speech-to-text}[speech-to-text]? Make them all use {j-lang}[J] so they all need
+to type less! How come nobody thought of that?
And if someone couldn't solve the programming puzzle in the given time window,
but could come back in the following day with an implementation that is not only
faster, but uses less memory, is simpler to understand and easier to read than
anybody else? You'd be losing that person too.
-[dijkstra-typing]: https://www.cs.utexas.edu/users/EWD/transcriptions/EWD05xx/EWD512.html
-[j-lang]: https://www.jsoftware.com/#/
-[perl-out-loud]: https://www.youtube.com/watch?v=Mz3JeYfBTcY
+=== IQ
-### IQ
+:determination-article: https://www.paulgraham.com/determination.html
+:scihub-article: https://sci-hub.do/https://psycnet.apa.org/doiLanding?doi=10.1037%2F1076-8971.6.1.33
-For "building an extraordinary team at a hard technology startup", intelligence
-is not the most important, [determination is][pg-determination].
+For "building an extraordinary team at a hard technology startup",
+intelligence is not the most important,
+{determination-article}[determination is].
-And talent isn't "IQ specialized for engineers". IQ itself isn't a measure of how
-intelligent someone is. Ever since Alfred Binet with Théodore Simon started to
-formalize what would become IQ tests years later, they already acknowledged
+And talent isn't "IQ specialized for engineers". IQ itself isn't a measure of
+how intelligent someone is. Ever since Alfred Binet and Théodore Simon started
+to formalize what would become IQ tests years later, they already acknowledged
limitations of the technique for measuring intelligence, which is
-[still true today][scihub-paper].
+{scihub-article}[still true today].
So having a high IQ tells only how smart people are for a particular aspect of
-intelligence, which is not representative of programming. There are numerous
+intelligence, which is not representative of programming. There are numerous
aspects of programming that are not covered by IQ measurement: how to name variables
and functions, how to create models which are compatible with schema evolution,
how to make the system dynamic for runtime parameterization without making it
fragile, how to measure and observe performance and availability, how to pick
-between acquiring and paying technical debt, *etc*.
+between acquiring and paying technical debt, _etc_.
Not to mention everything else that a programmer does that is not purely
-programming. Saying high IQ correlates with great programming is a stretch, at
+programming. Saying high IQ correlates with great programming is a stretch, at
best.
-[pg-determination]: http://www.paulgraham.com/determination.html
-[scihub-paper]: https://sci-hub.do/https://psycnet.apa.org/doiLanding?doi=10.1037%2F1076-8971.6.1.33
-
-### Ditch HR
+=== Ditch HR
Slava tangentially picks on HR, and I will digress on that a bit:
-> A good rule of thumb is that if a question could be asked by an intern in HR,
-> it's a non-differential signaling question.
+____
+A good rule of thumb is that if a question could be asked by an intern in HR,
+it's a non-differential signaling question.
+____
-Stretching it, this is a rather snobbish view of HR. Why is it that an intern in
-HR can't make signaling questions? Could the same be said of an intern in
+Stretching it, this is a rather snobbish view of HR. Why is it that an intern
+in HR can't ask signaling questions? Could the same be said of an intern in
engineering?
-In other words: is the question not signaling because the one
-asking is from HR, or because the one asking is an intern? If the latter, than
-he's just arguing that interns have no place in interviewing, but if the former
-than he was picking on HR.
+In other words: is the question not signaling because the one asking is from HR,
+or because the one asking is an intern? If the latter, then he's just arguing
+that interns have no place in interviewing, but if the former then he was
+picking on HR.
Extrapolating that, it is common to find people who don't value HR's work, and
only see them as inferiors doing unpleasant work, and who aren't capable enough
-(or *smart* enough) to learn programming.
+(or _smart_ enough) to learn programming.
-This is equivalent to people who work primarily on backend, and see others working on
-frontend struggling and say: "isn't it just building views and showing them on
-the browser? How could it possibly be that hard? I bet I could do it better,
-with 20% of code". As you already know, the answer to it is "well, why don't you
-go do it, then?".
+This is equivalent to people who work primarily on backend, and see others
+working on frontend struggling and say: "isn't it just building views and
+showing them on the browser? How could it possibly be that hard? I bet I could
+do it better, with 20% of the code". As you already know, the answer to it is
+"well, why don't you go do it, then?".
This sense of superiority ignores the fact that HR has actual professionals
-doing actual hard work, not unlike programmers. If HR is inferior and so easy,
+doing actual hard work, not unlike programmers. If HR is inferior and so easy,
why not automate everything away and get rid of a whole department?
I don't attribute this world view to Slava, this is only an extrapolation of a
snippet of the article.
-### Draconian mistreating of candidates
+=== Draconian mistreating of candidates
+
+:bad-apple: https://www.paulgraham.com/apple.html
+:be-good: https://www.paulgraham.com/good.html
If I found out that people employed theatrics in my interview so that I could
feel I've "earned the privilege to work at your company", I would quit.
If your moral compass is so broken that you are comfortable mistreating me while
I'm a candidate, I immediately assume you will also mistreat me as an employee,
-and that the company is not a good place to work, as
-[evil begets stupidity][evil-begets-stupidity]:
-
-> But the other reason programmers are fussy, I think, is that evil begets
-> stupidity. An organization that wins by exercising power starts to lose the
-> ability to win by doing better work. And it's not fun for a smart person to
-> work in a place where the best ideas aren't the ones that win. I think the
-> reason Google embraced "Don't be evil" so eagerly was not so much to impress
-> the outside world as to inoculate themselves against arrogance.
+and that the company is not a good place to work, as {bad-apple}[evil begets
+stupidity]:
+
+____
+But the other reason programmers are fussy, I think, is that evil begets
+stupidity. An organization that wins by exercising power starts to lose the
+ability to win by doing better work. And it's not fun for a smart person to
+work in a place where the best ideas aren't the ones that win. I think the
+reason Google embraced "Don't be evil" so eagerly was not so much to impress the
+outside world as to inoculate themselves against arrogance.
+____
Paul Graham goes beyond "don't be evil" with a better motto:
-"[be good][pg-be-good]".
+"{be-good}[be good]".
Abusing the asymmetric nature of an interview to increase the chance that the
-candidate will accept the offer is, well, abusive. I doubt a solid team can
+candidate will accept the offer is, well, abusive. I doubt a solid team can
actually be built on such poor foundations, surrounded by such evil measures.
And if you really want to give engineers "the measure of whoever they're going
to be working with", there are plenty of reasonable ways of doing it that don't
include performing fake interviews.
-[pg-be-good]: http://www.paulgraham.com/good.html
-[evil-begets-stupidity]: http://www.paulgraham.com/apple.html
-
-### Personality tests
+=== Personality tests
Personality tests around the world need to be a) translated, b) adapted and c)
-validated. Even though a given test may be applicable and useful in a country,
+validated. Even though a given test may be applicable and useful in a country,
this doesn't imply it will work for other countries.
Not only do tests usually come with translation guidelines, but also their
applicability needs to be validated again after the translation and adaptation
is done to see if the test still measures what it is supposed to.
-That is also true within the same language. If a test is shown to work in
-England, it may not work in New Zealand, in spite of both speaking english. The
+That is also true within the same language. If a test is shown to work in
+England, it may not work in New Zealand, in spite of both speaking English. The
cultural context difference is influential to the point of invalidating it.
-Irregardless of the validity of the proposed "big five" personality test,
-saying "just use attributes x, y and z this test and you'll be fine" is a rough
+Regardless of the validity of the proposed "big five" personality test, saying
+"just use attributes x, y and z of this test and you'll be fine" is a rough
simplification, much like saying "just use Raft for distributed systems, after
all it has been proven to work". It shows he throws all of that background away.
Even applying personality tests is not a trivial task: psychologists need
special training to become able to effectively apply one.
-### More cargo culting
+=== More cargo culting
+
+:cult: https://calteches.library.caltech.edu/51/2/CargoCult.htm
+:cult-archived: https://web.archive.org/web/20201003090303/https://calteches.library.caltech.edu/51/2/CargoCult.htm
He calls the ill-defined "industry standard" cargo-culting, but his
proposal isn't sound enough to not become one.
-Even if the ideas were good, they aren't solid enough, or based on solid
-enough things to make them stand out by themselves. Why is it that talent,
-judgment and personality are required to determine the fitness of a good
-candidate? Why not 2, 5, or 20 things? Why those specific 3? Why is talent
-defined like that? Is it just because he found talent to be like that?
+Even if the ideas were good, they aren't solid enough, or based on solid enough
+things to make them stand out by themselves. Why is it that talent, judgment
+and personality are required to determine the fitness of a good candidate? Why
+not 2, 5, or 20 things? Why those specific 3? Why is talent defined like that?
+Is it just because he found talent to be like that?
Isn't that definitionally also
-[cargo-culting][cargo-culting][^cargo-culting-archive]? Isn't he just repeating
-whatever he found to work form him, without understanding why?
+{cult}[cargo-culting]footnote:cargo-cult[
+ {cult-archived}[Archived version].
+]? Isn't he just repeating whatever he found to work for him, without
+understanding why?
What Feynman proposes is actually the opposite:
-> In summary, the idea is to try to give **all** of the information to help others
-> to judge the value of your contribution; not just the information that leads
-> to judgment in one particular direction or another.
+____
+In summary, the idea is to try to give *all* of the information to help others
+to judge the value of your contribution; not just the information that leads to
+judgment in one particular direction or another.
+____
What Slava did was just another form of cargo culting, but this was one that he
believed to work.
-[cargo-culting]: http://calteches.library.caltech.edu/51/2/CargoCult.htm
-[^cargo-culting-archive]: [Archived version](https://web.archive.org/web/20201003090303/http://calteches.library.caltech.edu/51/2/CargoCult.htm).
-
-## What to do
+== What to do
I will not give you a list of things that "worked for me, thus they are
-correct". I won't either critique the current "industry standard", nor what I've
-learned from interviewing engineers.
+correct". Nor will I critique the current "industry standard", or what
+I've learned from interviewing engineers.
Instead, I'd like to invite you to learn from history, and from what other
professionals have to teach us.
Programming isn't an odd profession, where everything about it is different from
-anything else. It is just another episode in the "technology" series, which has
-seasons since before recorded history. It may be an episode where things move a
+anything else. It is just another episode in the "technology" series, which has
+seasons since before recorded history. It may be an episode where things move a
bit faster, but it is fundamentally the same.
-So here is the key idea: what people did *before* software engineering?
+So here is the key idea: what did people do _before_ software engineering?
-What hiring is like for engineers in other areas? Don't civil, electrical and
+What is hiring like for engineers in other areas? Don't civil, electrical and
other types of engineering exist for much, much longer than software engineering
-does? What have those centuries of accumulated experience thought the world
+does? What have those centuries of accumulated experience taught the world
about technical hiring?
What studies were performed on the different success rates of interviewing
-strategies? What have they done right and what have they done wrong?
+strategies? What have they done right and what have they done wrong?
What is the purpose of HR? Why do they even exist? Do we need them, and if so,
-what for? What is the value they bring, since everybody insist on building an HR
-department in their companies? Is the existence of HR another form of cargo
+what for? What is the value they bring, since everybody insists on building an
+HR department in their companies? Is the existence of HR another form of cargo
culting?
What is industrial and organizational psychology? What is that field of study?
What do they specialize in? What have they learned since the discipline
-appeared? What have they done right and wrong over history? Is is the current
-academic consensus on that area? What is a hot debate topic in academia on that
-area? What is the current bleeding edge of research? What can they teach us
-about hiring? What can they teach us about technical hiring?
+appeared? What have they done right and wrong over history? What is the current
+academic consensus in that area? What is a hot debate topic in academia in that
+area? What is the current bleeding edge of research? What can they teach us
+about hiring? What can they teach us about technical hiring?
-## Conclusion
+== Conclusion
If all I've said makes me a "no hire" in the proposed framework, I'm really
glad.
diff --git a/src/content/blog/2020/11/07/diy-bugs.adoc b/src/content/blog/2020/11/07/diy-bugs.adoc
index b1dd117..0f561c1 100644
--- a/src/content/blog/2020/11/07/diy-bugs.adoc
+++ b/src/content/blog/2020/11/07/diy-bugs.adoc
@@ -1,79 +1,67 @@
----
-
-title: DIY an offline bug tracker with text files, Git and email
-
-date: 2020-11-07
-
-updated_at: 2021-08-14
-
-layout: post
-
-lang: en
-
-ref: diy-an-offline-bug-tracker-with-text-files-git-and-email
-
----
-
-When [push comes to shove][youtube-dl-takedown-notice], the operational aspects
-of governance of a software project matter a lot. And everybody likes to chime
-in with their alternative of how to avoid single points of failure in project
+= DIY an offline bug tracker with text files, Git and email
+
+:attack-on-ytdl: https://github.com/github/dmca/blob/master/2020/10/2020-10-23-RIAA.md
+:list-discussions: https://sourcehut.org/blog/2020-10-29-how-mailing-lists-prevent-censorship/
+:docs-in-repo: https://podcast.writethedocs.org/2017/01/25/episode-3-trends/
+:ci-in-notes: link:../../../../tils/2020/11/30/git-notes-ci.html
+:todos-mui: https://man.sr.ht/todo.sr.ht/#email-access
+:git-bug-bridges: https://github.com/MichaelMure/git-bug#bridges
+
+When {attack-on-ytdl}[push comes to shove], the operational aspects of
+governance of a software project matter a lot. And everybody likes to chime in
+with their alternative of how to avoid single points of failure in project
governance, just like I'm doing right now.
The most valuable assets of a project are:
-1. source code
-2. discussions
-3. documentation
-4. builds
-5. tasks and bugs
+. source code
+. discussions
+. documentation
+. builds
+. tasks and bugs
-For **source code**, Git and other DVCS solve that already: everybody gets a
-full copy of the entire source code.
+For *source code*, Git and other DVCS solve that already: everybody gets a full
+copy of the entire source code.
If your code forge is compromised, moving it to a new one takes a couple of
-minutes, if there isn't a secondary remote serving as mirror already. In this
+minutes, if there isn't a secondary remote serving as a mirror already. In this
case, no action is required.
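As a sketch of such a setup (the repository layout and the remote name below are
invented for the demo, not taken from any real project), keeping a secondary
mirror is a couple of Git commands:

```shell
set -e
# Hypothetical demo: a bare repository standing in for a secondary mirror.
git init -q --bare mirror.git
git init -q project
git -C project -c user.email=a@b.c -c user.name=a \
    commit -q --allow-empty -m "initial"
# Add the mirror as a second remote and replicate every ref to it.
git -C project remote add mirror ../mirror.git
git -C project push -q --mirror mirror
```

With that in place, pointing the project at a new forge is just a matter of
changing the remote URL.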
-If you're having your **discussions** by email,
-"[taking this archive somewhere else and carrying on is effortless][sourcehut-ml]".
+If you're having your *discussions* by email, "{list-discussions}[taking this
+archive somewhere else and carrying on is effortless]".
Besides, make sure to backup archives of past discussions so that the history is
also preserved when this migration happens.
-The **documentation** should
-[live inside the repository itself][writethedocs-in-repo][^writethedocs-in-repo],
-so that not only it gets first class treatment, but also gets distributed to
-everybody too. Migrating the code to a new forge already migrates the
+The *documentation* should {docs-in-repo}[live inside the repository
+itself]footnote:writethedocs-in-repo[
+ Described as "the ultimate marriage of the two". Starts at time 31:50.
+], so that not only it gets first class treatment, but also gets distributed to
+everybody too. Migrating the code to a new forge already migrates the
documentation with it.
-[^writethedocs-in-repo]: Described as "the ultimate marriage of the two". Starts
- at time 31:50.
-
-As long as you keep the **builds** vendor neutral, the migration should only
+As long as you keep the *builds* vendor neutral, the migration should only
involve adapting how you call your `tests.sh` from the format that
-`provider-1.yml` uses to the format that `provider-2.yml` accepts.
-It isn't valuable to carry the build history with the project, as this data
-quickly decays in value as weeks and months go by, but for simple text logs
-[using Git notes] may be just enough, and they would be replicated with the rest
-of the repository.
-
-[using Git notes]: {% link _tils/2020-11-30-storing-ci-data-on-git-notes.md %}
-
-But for **tasks and bugs** many rely on a vendor-specific service, where you
-register and manage those issues via a web browser. Some provide an
-[interface for interacting via email][todos-srht-email] or an API for
-[bridging local bugs with vendor-specific services][git-bug-bridges]. But
+`provider-1.yml` uses to the format that `provider-2.yml` accepts. It isn't
+valuable to carry the build history with the project, as this data quickly
+decays in value as weeks and months go by, but for simple text logs
+{ci-in-notes}[using Git notes] may be just enough, and they would be replicated
+with the rest of the repository.
+
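For instance (the `ci` notes ref and the demo repository below are my own
invention, and `tests.sh` stands in for the vendor-neutral test entry point), a
build log can be attached to the very commit it belongs to:

```shell
set -e
# Hypothetical demo repository with one commit to attach a log to.
git init -q demo
git -C demo -c user.email=a@b.c -c user.name=a \
    commit -q --allow-empty -m "initial"
# Pretend output of running ./tests.sh on that commit.
printf 'all tests passed\n' > demo/build.log
# Attach the log under a dedicated notes ref, and read it back later.
git -C demo notes --ref=ci add -F build.log HEAD
git -C demo notes --ref=ci show HEAD
# Notes replicate like any other ref:
#   git fetch origin 'refs/notes/*:refs/notes/*'
```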
+But for *tasks and bugs*, many rely on a vendor-specific service, where
+you register and manage those issues via a web browser. Some provide an
+{todos-mui}[interface for interacting via email] or an API for
+{git-bug-bridges}[bridging local bugs with vendor-specific services]. But
they're all layers around the service, disguising that it is a central
-point of failure, which when compromised would lead to data loss. When push comes
-to shove, you'd loose data.
+point of failure, which when compromised would lead to data loss. When push
+comes to shove, you'd lose data.
-[youtube-dl-takedown-notice]: https://github.com/github/dmca/blob/master/2020/10/2020-10-23-RIAA.md
-[sourcehut-ml]: https://sourcehut.org/blog/2020-10-29-how-mailing-lists-prevent-censorship/
-[writethedocs-in-repo]: https://podcast.writethedocs.org/2017/01/25/episode-3-trends/
-[todos-srht-email]: https://man.sr.ht/todo.sr.ht/#email-access
-[git-bug-bridges]: https://github.com/MichaelMure/git-bug#bridges
+== Alternative: text files, Git and email
-## Alternative: text files, Git and email
+:todos-example: https://euandre.org/git/remembering/tree/TODOs.md?id=3f727802cb73ab7aa139ca52e729fd106ea916d0
+:todos-script: https://euandre.org/git/remembering/tree/aux/workflow/TODOs.sh?id=3f727802cb73ab7aa139ca52e729fd106ea916d0
+:todos-html: https://euandreh.xyz/remembering/TODOs.html
+:fossil-tickets: https://fossil-scm.org/home/doc/trunk/www/bugtheory.wiki
Why not do the same as documentation, and move tasks and bugs into the
repository itself?
@@ -81,28 +69,24 @@ repository itself?
It requires no extra tool to be installed, and fits right in the already
existing workflow for source code and documentation.
-I like to keep a [`TODOs.md`] file at the repository top-level, with
-two relevant sections: "tasks" and "bugs". Then when building the documentation
-I'll just [generate an HTML file from it], and [publish] it alongside the static
-website. All that is done on the main branch.
+I like to keep a {todos-example}[`TODOs.md`] file at the repository top-level,
+with two relevant sections: "tasks" and "bugs". Then when building the
+documentation I'll just {todos-script}[generate an HTML file from it], and
+{todos-html}[publish] it alongside the static website. All that is done on the
+main branch.
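Concretely, such a file could look like this (the entries below are invented for
illustration, not taken from the linked repository):

```markdown
# TODOs

## Tasks

- task-1: add an RSS feed to the generated site
- task-2: document the release process

## Bugs

- bug-1: `make clean` leaves stale HTML files around.
  Reference: mailing list thread "stale artifacts" from 2020-11.
```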
Any issue discussions are done in the mailing list, and a reference to a
-discussion could be added to the ticket itself later on. External contributors
+discussion could be added to the ticket itself later on. External contributors
can file tickets by sending a patch.
The good thing about this solution is that it works for 99% of projects out
there.
-For the other 1%, having Fossil's "[tickets][fossil-tickets]" could be an
+For the other 1%, having Fossil's "{fossil-tickets}[tickets]" could be an
alternative, but you may not want to migrate your project to Fossil to get those
niceties.
Even though I keep a `TODOs.md` file on the main branch, you can have a `tasks`
- branch with a `task-n.md` file for each task, or any other way you like.
+branch with a `task-n.md` file for each task, or any other way you like.
These tools are familiar enough that you can adjust it to fit your workflow.
-
-[`TODOs.md`]: https://euandre.org/git/remembering/tree/TODOs.md?id=3f727802cb73ab7aa139ca52e729fd106ea916d0
-[generate an HTML file from it]: https://euandre.org/git/remembering/tree/aux/workflow/TODOs.sh?id=3f727802cb73ab7aa139ca52e729fd106ea916d0
-[publish]: https://euandreh.xyz/remembering/TODOs.html
-[fossil-tickets]: https://fossil-scm.org/home/doc/trunk/www/bugtheory.wiki
diff --git a/src/content/blog/2020/11/08/paradigm-shift-review.adoc b/src/content/blog/2020/11/08/paradigm-shift-review.adoc
index c98c131..dd31f87 100644
--- a/src/content/blog/2020/11/08/paradigm-shift-review.adoc
+++ b/src/content/blog/2020/11/08/paradigm-shift-review.adoc
@@ -1,164 +1,153 @@
----
+= The Next Paradigm Shift in Programming - video review
-title: The Next Paradigm Shift in Programming - video review
+:reviewed-video: https://www.youtube.com/watch?v=6YbK8o9rZfI
-date: 2020-11-08
+This is a review with comments of "{reviewed-video}[The Next Paradigm Shift in
+Programming]", by Richard Feldman.
-layout: post
-
-lang: en
-
-ref: the-next-paradigm-shift-in-programming-video-review
-
-eu_categories: video review
-
----
-
-This is a review with comments of
-"[The Next Paradigm Shift in Programming][video-link]", by Richard Feldman.
-
-This video was *strongly* suggested to me by a colleague. I wanted to discuss it
-with her, and when drafting my response I figured I could publish it publicly
+This video was _strongly_ suggested to me by a colleague. I wanted to discuss
+it with her, and when drafting my response I figured I could publish it publicly
instead.
Before anything else, let me just be clear: I really like the talk, and I think
-Richard is a great public speaker. I've watched several of his talks over the
+Richard is a great public speaker. I've watched several of his talks over the
years, and I feel I've followed his career at a distance, with much respect.
This isn't a piece criticizing him personally, and I agree with almost
-everything he said. These are just some comments but also nitpicks on a few
+everything he said. These are just some comments but also nitpicks on a few
topics I think he missed, or that I view differently.
-[video-link]: https://www.youtube.com/watch?v=6YbK8o9rZfI
+== Structured programming
-## Structured programming
+:forgotten-art-video: https://www.youtube.com/watch?v=SFv8Wm2HdNM
-The historical overview at the beginning is very good. In fact, the very video I
-watched previously was about structured programming!
+The historical overview at the beginning is very good. In fact, the very video
+I watched previously was about structured programming!
-Kevlin Henney on
-"[The Forgotten Art of Structured Programming][structured-programming]" does a
-deep-dive on the topic of structured programming, and how on his view it is
-still hidden in our code, when we do a `continue` or a `break` in some ways.
-Even though it is less common to see an explicit `goto` in code these days, many
-of the original arguments of Dijkstra against explicit `goto`s is applicable to
-other constructs, too.
+Kevlin Henney on "{forgotten-art-video}[The Forgotten Art of Structured
+Programming]" does a deep-dive on the topic of structured programming, and how
+in his view it is still hidden in our code, in some ways, when we use a
+`continue` or a `break`. Even though it is less common to see an explicit
+`goto` in code these days, many of the original arguments of Dijkstra against
+explicit `goto`s are applicable to other constructs, too.
-This is a very mature view, and I like how he goes beyond the
-"don't use `goto`s" heuristic and proposes and a much more nuanced understanding
-of what "structured programming" means.
+This is a very mature view, and I like how he goes beyond the "don't use
+`goto`s" heuristic and proposes a much more nuanced understanding of what
+"structured programming" means.
In a few minutes, Richard is able to condense most of the significant bits of
-Kevlin's talk in a didactical way. Good job.
+Kevlin's talk in a didactic way. Good job.
-[structured-programming]: https://www.youtube.com/watch?v=SFv8Wm2HdNM
+== OOP like a distributed system
-## OOP like a distributed system
+:joe-oop: https://www.infoq.com/interviews/johnson-armstrong-oop/
+:rich-hickey-oop: https://www.youtube.com/watch?v=ROor6_NGIWU
-Richard extrapolates Alan Kay's original vision of OOP, and he concludes that
-it is more like a distributed system that how people think about OOP these days.
+Richard extrapolates Alan Kay's original vision of OOP, and he concludes that it
+is more like a distributed system than how people think about OOP these days.
But he then states that this is a rather bad idea, and we shouldn't pursue it,
given that distributed systems are known to be hard.
-However, his extrapolation isn't really impossible, bad or an absurd. In fact,
-it has been followed through by Erlang. Joe Armstrong used to say that
-"[Erlang might the only OOP language][erlang-oop]", since it actually adopted
-this paradigm.
+However, his extrapolation isn't really impossible, bad or absurd. In fact,
+it has been followed through by Erlang. Joe Armstrong used to say that
+"{joe-oop}[Erlang might be the only OOP language]", since it actually adopted
+this paradigm.
-But Erlang is a functional language. So this "OOP as a distributed system" view
+But Erlang is a functional language. So this "OOP as a distributed system" view
is more about designing systems in the large than programs in the small.
There is a switch of levels in this comparison I'm making, as can be done with
any language or paradigm: you can have a functional-like system that is built
with an OOP language (like a compiler, that given the same input will produce
-the same output), or an OOP-like system that is built with a functional language
-(Rich Hickey calls it
-"[OOP in the large][langsys]"[^the-language-of-the-system]).
+the same output), or an OOP-like system that is built with a functional
+language (Rich Hickey calls it "{rich-hickey-oop}[OOP in the
+large]"footnote:langsys[
+ From 24:05 to 27:45.
+]).
So this jump from in-process paradigm to distributed paradigm is rather a big
one, and I don't think he can argue that OOP has anything to say about
-software distribution across nodes. You can still have Erlang actors that run
+software distribution across nodes. You can still have Erlang actors that run
independently and send messages to each other without a network between them.
Any OTP application deployed on a single node effectively works like that.
-I think he went a bit too far with this extrapolation. Even though I agree it is
-a logical a fair one, it isn't evidently bad as he painted. I would be fine
-working with a single-node OTP application and seeing someone call it "a *real*
+I think he went a bit too far with this extrapolation. Even though I agree it
+is a logical and fair one, it isn't as evidently bad as he painted. I would be
+fine working with a single-node OTP application and seeing someone call it "a _real_
OOP program".
-[erlang-oop]: https://www.infoq.com/interviews/johnson-armstrong-oop/
-[langsys]: https://www.youtube.com/watch?v=ROor6_NGIWU
-[^the-language-of-the-system]: From 24:05 to 27:45.
+== First class immutability
-## First class immutability
+:immer: https://sinusoid.es/immer/
+:immutable-js: https://immutable-js.github.io/immutable-js/
-I agree with his view of languages moving towards the functional paradigm.
-But I think you can narrow down the "first-class immutability" feature he points
-out as present on modern functional programming languages to "first-class
-immutable data structures".
+I agree with his view of languages moving towards the functional paradigm. But
+I think you can narrow down the "first-class immutability" feature he points out
+as present on modern functional programming languages to "first-class immutable
+data structures".
I wouldn't categorize a language as "supporting functional programming style"
-without a library for functional data structures it. By discipline you can avoid
-side-effects, write pure functions as much as possible, and pass functions as
-arguments around is almost every language these days, but if when changing an
+without a library for functional data structures in it. By discipline you can
+avoid side-effects, write pure functions as much as possible, and pass functions
+as arguments around in almost every language these days, but if changing an
element of a vector mutates things in-place, that is still not functional
programming.
To avoid that, you end up needing to make clones of objects to pass to a
-function, using freezes or other workarounds. All those cases are when the
+function, using freezes or other workarounds. All those cases are where the
underlying mix of OOP and functional programming fails.
There are some languages with third-party libraries that provide functional data
-structures, like [immer][immer] for C++, or [ImmutableJS][immutablejs] for
+structures, like {immer}[immer] for C++, or {immutable-js}[ImmutableJS] for
JavaScript.
But functional programming is more easily achievable in languages that have them
built-in, like Erlang, Elm and Clojure.
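The contract those libraries provide can be illustrated with a toy sketch (in Python rather than the document's Clojure; a real persistent vector, as in Clojure, immer or ImmutableJS, shares structure via trees instead of copying, but the observable behaviour is the same):

```python
def assoc(vec, i, value):
    """Return a new tuple with index i replaced by value.

    The original tuple is never mutated; callers holding v1 keep seeing
    the old contents, which is the whole point of immutable data
    structures. (Plain copying here; real libraries use structural
    sharing to make this cheap.)
    """
    return vec[:i] + (value,) + vec[i + 1:]

v1 = (1, 2, 3)
v2 = assoc(v1, 0, 99)
print(v1, v2)  # (1, 2, 3) (99, 2, 3)
```

The key property: every "update" yields a new value, so there is nothing to freeze or defensively clone before passing it around.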
-[immer]: https://sinusoid.es/immer/
-[immutablejs]: https://immutable-js.github.io/immutable-js/
+== Managed side-effects
-## Managed side-effects
+:redux: https://redux.js.org/
+:re-frame: https://github.com/Day8/re-frame
His proposal of adopting managed side-effects as a first-class language concept
is really intriguing.
-This is something you can achieve with a library, like [Redux][redux] for JavaScript or
-[re-frame][re-frame] for Clojure.
+This is something you can achieve with a library, like {redux}[Redux] for
+JavaScript or {re-frame}[re-frame] for Clojure.
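The core idea behind those libraries can be sketched generically (a Python sketch, not Redux's or re-frame's actual API; all names are illustrative): the application's update function stays pure and returns *descriptions* of side-effects, and a separate runtime performs them.

```python
def update(state, event):
    """Pure: returns (new_state, effect descriptions); performs no IO."""
    if event["type"] == "clicked-save":
        return state, [{"effect": "http-post", "url": "/save", "body": state}]
    return state, []

def run_effects(effects, perform):
    # Only the runtime executes effects; application code never does.
    for effect in effects:
        perform(effect)

log = []
state, effects = update({"count": 1}, {"type": "clicked-save"})
run_effects(effects, log.append)
print(log)  # [{'effect': 'http-post', 'url': '/save', 'body': {'count': 1}}]
```

Because effects are plain data until the runtime interprets them, `update` can be tested without any network or clock, which is the main selling point of managed side-effects.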
I haven't worked with a language with managed side-effects at scale, and I don't
-feel this is a problem with Clojure or Erlang. But is this me finding a flaw in
-his argument or not acknowledging a benefit unknown to me? This is a provocative
-question I ask myself.
+feel the lack of it is a problem in Clojure or Erlang. But is this me finding
+a flaw in his argument, or not acknowledging a benefit unknown to me? This is a
+provocative question I ask myself.
Also all FP languages with managed side-effects I know are statically-typed, and
-all dynamically-typed FP languages I know don't have managed side-effects baked in.
+all dynamically-typed FP languages I know don't have managed side-effects baked
+in.
-[redux]: https://redux.js.org/
-[re-frame]: https://github.com/Day8/re-frame
+== What about declarative programming?
-## What about declarative programming?
+:tarpit-article: https://curtclifton.net/papers/MoseleyMarks06a.pdf
-In "[Out of the Tar Pit][tar-pit]", B. Moseley and P. Marks go beyond his view
-of functional programming as the basis, and name a possible "functional
-relational programming" as an even better solution. They explicitly call out
+In "{tarpit-article}[Out of the Tar Pit]", B. Moseley and P. Marks go beyond his
+view of functional programming as the basis, and name a possible "functional
+relational programming" as an even better solution. They explicitly call out
some flaws in most of the modern functional programming languages, and instead
pick declarative programming as an even better starting paradigm.
If the next paradigm shift is towards functional programming, will the following
shift be towards declarative programming?
-[tar-pit]: http://curtclifton.net/papers/MoseleyMarks06a.pdf
+== Conclusion
-## Conclusion
+:simple-made-easy: https://www.infoq.com/presentations/Simple-Made-Easy/
Beyond all Richard said, I also often hear people bring up functional
programming when talking about utilizing all cores of a computer, and how FP
can help with that.
Rich Hickey makes a great case for single-process FP on his famous talk
-"[Simple Made Easy][simple-made-easy]".
-
-[simple-made-easy]: https://www.infoq.com/presentations/Simple-Made-Easy/
+"{simple-made-easy}[Simple Made Easy]".
-<!-- I find this conclusion too short, and it doesn't revisits the main points -->
-<!-- presented on the body of the article. I won't rewrite it now, but it would be an -->
-<!-- improvement to extend it to do so. -->
+////
+I find this conclusion too short, and it doesn't revisit the main points
+presented in the body of the article. I won't rewrite it now, but it would be
+an improvement to extend it to do so.
+////
diff --git a/src/content/blog/2020/11/12/database-parsers-trees.adoc b/src/content/blog/2020/11/12/database-parsers-trees.adoc
index 1870fad..eed785b 100644
--- a/src/content/blog/2020/11/12/database-parsers-trees.adoc
+++ b/src/content/blog/2020/11/12/database-parsers-trees.adoc
@@ -1,99 +1,92 @@
= Durable persistent trees and parser combinators - building a database
-date: 2020-11-12
-
-updated_at: 2021-02-09
-
-layout: post
-
-lang: en
-
-ref: durable-persistent-trees-and-parser-combinators-building-a-database
-
-eu_categories: mediator
-
----
+:empty:
+:db-article: link:../../08/31/database-i-wish-i-had.html
I've received with certain frequency messages from people wanting to know if
-I've made any progress on the database project
-[I've written about]({% link _articles/2020-08-31-the-database-i-wish-i-had.md %}).
+I've made any progress on the database project {db-article}[I've written about].
There are a few areas where I've made progress, and here's a public post on it.
== Proof-of-concept: DAG log
+:mediator-permalink: https://euandre.org/git/mediator/tree/src/core/clojure/src/mediator.clj?id=db4a727bc24b54b50158827b34502de21dbf8948#n1
+
The main thing I wanted to validate with a concrete implementation was the
concept of modeling a DAG on a sequence of datoms.
-The notion of a *datom* is a rip-off from Datomic, which models data with time
-aware *facts*, which come from RDF. RDF's fact is a triple of
+The notion of a _datom_ is a rip-off from Datomic, which models data with
+time-aware _facts_, which come from RDF. RDF's fact is a triple of
subject-predicate-object, and Datomic's datoms add a time component to it:
subject-predicate-object-time, A.K.A. entity-attribute-value-transaction:
-```clojure
+[source,clojure]
+----
[[person :likes "pizza" 0 true]
[person :likes "bread" 1 true]
[person :likes "pizza" 1 false]]
-```
+----
-The above datoms say:
-- at time 0, `person` like pizza;
-- at time 1, `person` stopped liking pizza, and started to like bread.
+The above datoms say:
+
+* at time 0, `person` liked pizza;
+* at time 1, `person` stopped liking pizza, and started to like bread.
Datomic ensures total consistency of this ever growing log by having a single
writer, the transactor, that will enforce it when writing.
In order to support disconnected clients, I needed a way to allow multiple
-writers, and I chose to do it by making the log not a list, but a
-directed acyclic graph (DAG):
+writers, and I chose to do it by making the log not a list, but a directed
+acyclic graph (DAG):
-```clojure
+[source,clojure]
+----
[[person :likes "pizza" 0 true]
[0 :parent :db/root 0 true]
[person :likes "bread" 1 true]
[person :likes "pizza" 1 false]
[1 :parent 0 1 true]]
-```
+----
The extra datoms above add more information to build the directionality to the
log, and instead of a single consistent log, the DAG could have multiple leaves
that coexist, much like how different Git branches can have different "latest"
commits.
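To make the branching concrete, here is a small sketch (in Python rather than the Clojure of the PoC; the datom layout follows the listings above) that computes the leaves of the datom DAG, i.e. the "latest" transactions of each coexisting branch:

```python
def dag_leaves(datoms):
    """Find the leaf transactions of a datom DAG.

    Each datom is an (entity, attribute, value, transaction, added)
    tuple; `:parent` datoms encode the edges of the DAG. A leaf is a
    transaction that no other transaction names as its parent.
    """
    parents = set()
    transactions = set()
    for entity, attribute, value, transaction, added in datoms:
        transactions.add(transaction)
        if attribute == ":parent":
            parents.add(value)  # `value` is the parent transaction
    return transactions - parents

datoms = [
    ("person", ":likes", "pizza", 0, True),
    (0, ":parent", ":db/root", 0, True),
    ("person", ":likes", "bread", 1, True),
    ("person", ":likes", "pizza", 1, False),
    (1, ":parent", 0, 1, True),
]
print(dag_leaves(datoms))  # {1}
```

Adding a second child of transaction 0 would yield two leaves, the same way two Git branches forked from one commit have two different "latest" commits.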
-In order to validate this idea, I started with a Clojure implementation. The
+In order to validate this idea, I started with a Clojure implementation. The
goal was not to write the actual final code, but to make a proof-of-concept that
would allow me to test and stretch the idea itself.
-This code [already exists][clj-poc], but is yet fairly incomplete:
+This code {mediator-permalink}[already exists], but it is still fairly incomplete:
-- the building of the index isn't done yet (with some
- [commented code][clj-poc-index] on the next step to be implemented)
-- the indexing is extremely inefficient, with [more][clj-poc-o2-0]
- [than][clj-poc-o2-1] [one][clj-poc-o2-2] occurrence of `O²` functions;
-- no query support yet.
+:commented-code: https://euandre.org/git/mediator/tree/src/core/clojure/src/mediator.clj?id=db4a727bc24b54b50158827b34502de21dbf8948#n295
+:more: https://euandre.org/git/mediator/tree/src/core/clojure/src/mediator.clj?id=db4a727bc24b54b50158827b34502de21dbf8948#n130
+:than: https://euandre.org/git/mediator/tree/src/core/clojure/src/mediator.clj?id=db4a727bc24b54b50158827b34502de21dbf8948#n146
+:one: https://euandre.org/git/mediator/tree/src/core/clojure/src/mediator.clj?id=db4a727bc24b54b50158827b34502de21dbf8948#n253
-[clj-poc]: https://euandre.org/git/mediator/tree/src/core/clojure/src/mediator.clj?id=db4a727bc24b54b50158827b34502de21dbf8948#n1
-[clj-poc-index]: https://euandre.org/git/mediator/tree/src/core/clojure/src/mediator.clj?id=db4a727bc24b54b50158827b34502de21dbf8948#n295
-[clj-poc-o2-0]: https://euandre.org/git/mediator/tree/src/core/clojure/src/mediator.clj?id=db4a727bc24b54b50158827b34502de21dbf8948#n130
-[clj-poc-o2-1]: https://euandre.org/git/mediator/tree/src/core/clojure/src/mediator.clj?id=db4a727bc24b54b50158827b34502de21dbf8948#n146
-[clj-poc-o2-2]: https://euandre.org/git/mediator/tree/src/core/clojure/src/mediator.clj?id=db4a727bc24b54b50158827b34502de21dbf8948#n253
+* the building of the index isn't done yet (with some {commented-code}[commented
+  code] describing the next step to be implemented);
+* the indexing is extremely inefficient, with {more}[more] {than}[than]
+  {one}[one] occurrence of `O(n²)` functions;
+* no query support yet.
-== Top-down *and* bottom-up
+== Top-down _and_ bottom-up
However, as time passed and I started looking at what the final implementation
would look like, I started to consider keeping the PoC around.
The top-down approach (Clojure PoC) was in fact helping guide me with the
bottom-up, and I now have "promoted" the Clojure PoC into a "reference
-implementation". It should now be a finished implementation that says what the
+implementation". It should now be a finished implementation that says what the
expected behaviour is, and the actual code should match the behaviour.
The good thing about a reference implementation is that it has no performance or
-resources boundary, so if it ends up being 1000x slower and using 500× more
-memory, it should be find. The code can be also 10x or 100x simpler, too.
+resource boundaries, so if it ends up being 1000× slower and using 500× more
+memory, it should be fine. The code can also be 10× or 100× simpler, too.
== Top-down: durable persistent trees
+:pavlo-videos: https://www.youtube.com/playlist?list=PLSE8ODhjZXjbohkNBWQs_otTrBTrjyohi
+:db-book: https://www.databass.dev/
+
In promoting the PoC into a reference implementation, this top-down approach now
needs to go beyond doing everything in memory, and the index data structure now
needs to be disk-based.
@@ -102,115 +95,118 @@ Roughly speaking, most storage engines out there are based either on B-Trees or
LSM Trees, or some variations of those.
But when building an immutable database, update-in-place B-Trees aren't an
-option, as it doesn't accommodate keeping historical views of the tree. LSM Trees
-may seem a better alternative, but duplication on the files with compaction are
-also ways to delete old data which is indeed useful for a historical view.
+option, as it doesn't accommodate keeping historical views of the tree. LSM
+Trees may seem a better alternative, but their duplication across files and
+their compaction are both ways of deleting old data, and that old data is
+indeed useful for a historical view.
I think the thing I'm after is a mix of a Copy-on-Write B-Tree, which would keep
historical versions with the write IO cost amortization of memtables of LSM
-Trees. I don't know of any B-Tree variant out there that resembles this, so I'll
-call it "Flushing Copy-on-Write B-Tree".
+Trees. I don't know of any B-Tree variant out there that resembles this, so
+I'll call it "Flushing Copy-on-Write B-Tree".
I haven't written any code for this yet, so all I have is a high-level view of
what it will look like:
-1. like Copy-on-Write B-Trees, changing a leaf involves creating a new leaf and
- building a new path from root to the leaf. The upside is that writes a lock
+. like Copy-on-Write B-Trees, changing a leaf involves creating a new leaf and
+  building a new path from root to the leaf. The upside is that writes are
lock-free, and no coordination is needed between readers and writers, ever;
-
-2. the downside is that a single leaf update means at least `H` new nodes that
- will have to be flushed to disk, where `H` is the height of the tree. To avoid
- that, the writer creates these nodes exclusively on the in-memory memtable, to
- avoid flushing to disk on every leaf update;
-
-3. a background job will consolidate the memtable data every time it hits X MB,
+. the downside is that a single leaf update means at least `H` new nodes that
+ will have to be flushed to disk, where `H` is the height of the tree. To
+ avoid that, the writer creates these nodes exclusively on the in-memory
+ memtable, to avoid flushing to disk on every leaf update;
+. a background job will consolidate the memtable data every time it hits X MB,
and persist it to disk, amortizing the cost of the Copy-on-Write B-Tree;
-
-4. readers than will have the extra job of getting the latest relevant
+. readers then will have the extra job of getting the latest relevant
disk-resident value and merging it with the memtable data.
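The four steps can be sketched as a toy model (Python, purely illustrative; plain dicts stand in for the actual Copy-on-Write B-Tree pages and disk IO):

```python
class FlushingStore:
    """Toy model of the flushing Copy-on-Write scheme: writes go to an
    in-memory memtable, a flush persists it as a new immutable version,
    and reads merge the memtable over the latest persisted version."""

    def __init__(self, flush_threshold=3):
        self.memtable = {}
        self.disk_versions = [{}]  # immutable snapshots, never updated in place
        self.flush_threshold = flush_threshold

    def write(self, key, value):
        self.memtable[key] = value  # step 2: no disk IO per leaf update
        if len(self.memtable) >= self.flush_threshold:
            self.flush()  # step 3: amortize the Copy-on-Write cost

    def flush(self):
        latest = self.disk_versions[-1]
        # A new snapshot built from the old one; old versions stay intact.
        self.disk_versions.append({**latest, **self.memtable})
        self.memtable = {}

    def read(self, key):
        if key in self.memtable:  # step 4: memtable wins over disk
            return self.memtable[key]
        return self.disk_versions[-1].get(key)

store = FlushingStore()
store.write("a", 1)
store.write("b", 2)
print(store.read("a"))  # 1, served from the memtable before any flush
```

Every entry of `disk_versions` remains readable forever, which mirrors how old B-Tree roots stay available for historical queries.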
-The key difference to existing Copy-on-Write B-Trees is that the new trees
-are only periodically written to disk, and the intermediate values are kept in
-memory. Since no node is ever updated, the page utilization is maximum as it
+The key difference to existing Copy-on-Write B-Trees is that the new trees are
+only periodically written to disk, and the intermediate values are kept in
+memory. Since no node is ever updated, the page utilization is maximum as it
doesn't need to keep space for future inserts and updates.
And the key difference to existing LSM Trees is that no compaction is run:
-intermediate values are still relevant as the database grows. So this leaves out
-tombstones and value duplication done for write performance.
+intermediate values are still relevant as the database grows. So this leaves
+out tombstones and value duplication done for write performance.
One can delete intermediate index values to reclaim space, but no data is lost
-on the process, only old B-Tree values. And if the database ever comes back to
+in the process, only old B-Tree values. And if the database ever comes back to
that point (like when doing a historical query), the B-Tree will have to be
-rebuilt from a previous value. After all, the database *is* a set of datoms, and
-everything else is just derived data.
+rebuilt from a previous value. After all, the database _is_ a set of datoms,
+and everything else is just derived data.
Right now I'm still reading about other data structures that storage engines
use, and I'll start implementing the "Flushing Copy-on-Write B-Tree" as I learn
-more[^learn-more-db] and mature it more.
-
-[^learn-more-db]: If you are interested in learning more about this too, the
- very best two resources on this subject are Andy Pavlo's
- "[Intro to Database Systems](https://www.youtube.com/playlist?list=PLSE8ODhjZXjbohkNBWQs_otTrBTrjyohi)"
- course and Alex Petrov's "[Database Internals](https://www.databass.dev/)" book.
+more{empty}footnote:learn-more-db[
+ If you are interested in learning more about this too, the very best two
+ resources on this subject are Andy Pavlo's "{pavlo-videos}[Intro to Database
+ Systems]" course and Alex Petrov's "{db-book}[Database Internals]" book.
+] and mature it more.
== Bottom-up: parser combinators and FFI
+:cbindgen: https://github.com/eqrion/cbindgen
+:cbindgen-next: https://blog.eqrion.net/future-directions-for-cbindgen/
+:syn-crate: https://github.com/dtolnay/syn
+:libedn: https://euandre.org/git/libedn/
+
I chose Rust as it has the best WebAssembly tooling support.
My goal is not to build a Rust database, but a database that happens to be in
-Rust. In order to reach client platforms, the primary API is the FFI one.
+Rust. In order to reach client platforms, the primary API is the FFI one.
I'm not very happy with current tools for exposing Rust code via FFI to the
-external world: they either mix C with C++, which I don't want to do, or provide
-no access to the intermediate representation of the FFI, which would be useful
-for generating binding for any language that speaks FFI.
+external world: they either mix C with C++, which I don't want to do, or
+provide no access to the intermediate representation of the FFI, which would be
+useful for generating bindings for any language that speaks FFI.
-I like better the path that the author of [cbindgen][cbindgen-crate]
-crate [proposes][rust-ffi]: emitting an data representation of the Rust C API
+I like better the path that the author of the {cbindgen}[cbindgen] crate
+{cbindgen-next}[proposes]: emitting a data representation of the Rust C API
(the author calls it a `ffi.json` file), and then building transformers from the
-data representation to the target language. This way you could generate a C API
-*and* the node-ffi bindings for JavaScript automatically from the Rust code.
+data representation to the target language. This way you could generate a C API
+_and_ the node-ffi bindings for JavaScript automatically from the Rust code.
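As an illustration of that pipeline (the real `ffi.json` schema is whatever the cbindgen author's proposal ends up specifying; the shape and the `db_open` function below are invented), one transformer could look like:

```python
import json

# A hypothetical ffi.json fragment describing one exported function.
ffi_json = json.loads("""
{
  "functions": [
    {"name": "db_open",
     "args": [{"name": "path", "type": "const char *"}],
     "returns": "int"}
  ]
}
""")

def emit_c_header(desc):
    """One possible transformer: ffi.json -> C function declarations.
    A sibling transformer could emit node-ffi bindings from the same data."""
    lines = []
    for fn in desc["functions"]:
        args = ", ".join(f"{a['type']} {a['name']}" for a in fn["args"])
        lines.append(f"{fn['returns']} {fn['name']}({args});")
    return "\n".join(lines)

print(emit_c_header(ffi_json))  # int db_open(const char * path);
```

The point is that the intermediate data representation decouples parsing the Rust code from emitting any particular target language.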
So the first thing to be done before moving on is an FFI exporter that doesn't
mix C and C++, and generates said `ffi.json`, and then build a few transformers
that take this `ffi.json` and generate the language bindings, be it C, C++,
-JavaScript, TypeScript, Kotlin, Swift, Dart, *etc*[^ffi-langs].
-
-[^ffi-langs]: Those are, specifically, the languages I'm more interested on. My
- goal is supporting client applications, and those languages are the most
- relevant for doing so: C for GTK, C++ for Qt, JavaScript and TypeScript for
- Node.js and browser, Kotlin for Android and Swing, Swift for iOS, and Dart
- for Flutter.
+JavaScript, TypeScript, Kotlin, Swift, Dart,
+_etc_footnote:ffi-langs[
+  Those are, specifically, the languages I'm more interested in. My goal is
+ supporting client applications, and those languages are the most relevant for
+ doing so: C for GTK, C++ for Qt, JavaScript and TypeScript for Node.js and
+ browser, Kotlin for Android and Swing, Swift for iOS, and Dart for Flutter.
+].
I think the best way to get there is by taking the existing code for cbindgen,
-which uses the [syn][syn-crate] crate to parse the Rust code[^rust-syn], and
-adapt it to emit the metadata.
-
-[^rust-syn]: The fact that syn is an external crate to the Rust compiler points
- to a big warning: procedural macros are not first class in Rust. They are
- just like Babel plugins in JavaScript land, with the extra shortcoming that
- there is no specification for the Rust syntax, unlike JavaScript.
-
- As flawed as this may be, it seems to be generally acceptable and adopted,
- which works against building a solid ecosystem for Rust.
-
- The alternative that rust-ffi implements relies on internals of the Rust
- compiler, which isn't actually worst, just less common and less accepted.
-
-I've started a fork of cbindgen: ~~x-bindgen~~[^x-bindgen]. Right now it is
-just a copy of cbindgen verbatim, and I plan to remove all C and C++ emitting
-code from it, and add a IR emitting code instead.
-
-[^x-bindgen]: *EDIT*: now archived, the experimentation was fun. I've started to move more towards C, so this effort became deprecated.
+which uses the {syn-crate}[syn] crate to parse the Rust
+code{empty}footnote:rust-syn[
+ The fact that syn is an external crate to the Rust compiler points to a big
+ warning: procedural macros are not first class in Rust. They are just like
+ Babel plugins in JavaScript land, with the extra shortcoming that there is no
+ specification for the Rust syntax, unlike JavaScript.
+FIXME
+ As flawed as this may be, it seems to be generally acceptable and adopted,
+ which works against building a solid ecosystem for Rust.
+FIXME
+ The alternative that rust-ffi implements relies on internals of the Rust
+  compiler, which isn't actually worse, just less common and less accepted.
+], and adapt it to emit the metadata.
+
+I've started a fork of cbindgen:
+[line-through]#x-bindgen#{empty}footnote:x-bindgen[
+ _EDIT_: now archived, the experimentation was fun. I've started to move more
+ towards C, so this effort became deprecated.
+]. Right now it is just a copy of cbindgen verbatim, and I plan to remove all C
+and C++ emitting code from it, and add IR emitting code instead.
When I started working on x-bindgen, I realized I didn't know what to look for in
-a header file, as I haven't written any C code in many years. So as I was
-writing [libedn][libedn-repo], I didn't know how to build a good C API to
-expose. So I tried porting the code to C, and right now I'm working on building
-a *good* C API for a JSON parser using parser combinators:
-~~ParsecC~~ [^parsecc].
-
-[^parsecc]: *EDIT*: now also archived.
+a header file, as I haven't written any C code in many years. So as I was
+writing {libedn}[libedn], I didn't know how to build a good C API to expose. So
+I tried porting the code to C, and right now I'm working on building a _good_ C
+API for a JSON parser using parser combinators:
+[line-through]#ParsecC#{empty}footnote:parsecc[
+ _EDIT_: now also archived.
+].
After "finishing" ParsecC I'll have a good notion of what a good C API is, and
I'll have a better direction towards how to expose code from libedn to other
@@ -219,11 +215,6 @@ languages, and work on x-bindgen then.
What both libedn and ParsecC are missing right now are proper error reporting,
and property-based testing for libedn.
-[cbindgen-crate]: https://github.com/eqrion/cbindgen
-[syn-crate]: https://github.com/dtolnay/syn
-[rust-ffi]: https://blog.eqrion.net/future-directions-for-cbindgen/
-[libedn-repo]: https://euandre.org/git/libedn/
-
== Conclusion
I've learned a lot already, and I feel the journey I'm on is worth going
diff --git a/src/content/blog/2020/11/14/local-first-review.adoc b/src/content/blog/2020/11/14/local-first-review.adoc
index c24095a..0dd3bea 100644
--- a/src/content/blog/2020/11/14/local-first-review.adoc
+++ b/src/content/blog/2020/11/14/local-first-review.adoc
@@ -1,23 +1,15 @@
= Local-First Software: You Own Your Data, in spite of the Cloud - article review
-date: 2020-11-14
+:empty:
+:presentation: link:../../../../slides/2020/11/14/local-first.html
+:reviewed-article: https://martin.kleppmann.com/papers/local-first.pdf
-layout: post
+_This article is derived from a {presentation}[presentation] given at a Papers
+We Love meetup on the same subject._
-lang: en
-
-ref: local-first-software-you-own-your-data-in-spite-of-the-cloud-article-review
-
-eu_categories: presentation,article review
-
----
-
-*This article is derived from a [presentation][presentation] given at a Papers
-We Love meetup on the same subject.*
-
-This is a review of the article
-"[Local-First Software: You Own Your Data, in spite of the Cloud][article-pdf]",
-by M. Kleppmann, A. Wiggins, P. Van Hardenberg and M. F. McGranaghan.
+This is a review of the article "{reviewed-article}[Local-First Software: You
+Own Your Data, in spite of the Cloud]", by M. Kleppmann, A. Wiggins, P. Van
+Hardenberg and M. F. McGranaghan.
== Offline-first, local-first
@@ -27,34 +19,34 @@ client, and there are conflict resolution algorithms that reconcile data created
on different instances.
Sometimes I see confusion with this idea and "client-side", "offline-friendly",
-"syncable", etc. I have myself used this terms, also.
+"syncable", etc. I have myself used these terms, too.
There exists, however, already the "offline-first" term, which conveys almost
-all of that meaning. In my view, "local-first" doesn't extend "offline-first" in
-any aspect, rather it gives a well-defined meaning to it instead. I could say
-that "local-first" is just "offline-first", but with 7 well-defined ideals
+all of that meaning. In my view, "local-first" doesn't extend "offline-first"
+in any aspect; rather, it gives a well-defined meaning to it. I could
+say that "local-first" is just "offline-first", but with 7 well-defined ideals
instead of community best practices.
It is a step forward, and given the number of times I've seen the paper shared
around I think there's a chance people will prefer saying "local-first" in
-*lieu* of "offline-first" from now on.
-
-[presentation]: {% link _slides/2020-11-14-on-local-first-beyond-the-crdt-silver-bullet.slides %}
-[article-pdf]: https://martin.kleppmann.com/papers/local-first.pdf
+_lieu_ of "offline-first" from now on.
== Software licenses
On a footnote of the 7th ideal ("You Retain Ultimate Ownership and Control"),
the authors say:
-> In our opinion, maintaining control and ownership of data does not mean that
-> the software must necessarily be open source. (...) as long as it does not
-> artificially restrict what users can do with their files.
+____
+In our opinion, maintaining control and ownership of data does not mean that the
+software must necessarily be open source. (...) as long as it does not
+artificially restrict what users can do with their files.
+____
They give examples of artificial restrictions, like this artificial restriction
I've come up with:
-```bash
+[source,bash]
+----
#!/bin/sh
TODAY=$(date +%s)
@@ -66,23 +58,24 @@ if [ $TODAY -ge $LICENSE_EXPIRATION ]; then
fi
echo $((2 + 2))
-```
+----
Now when using this very useful program:
-```bash
+[source,bash]
+----
# today
$ ./useful-adder.sh
4
# tomorrow
$ ./useful-adder.sh
License expired!
-```
+----
This is obviously an intentional restriction, and it goes against the 5th ideal
-("The Long Now"). This software would only be useful as long as the embedded
-license expiration allowed. Sure you could change the clock on the computer, but
-there are many other ways that this type of intentional restriction is in
+("The Long Now"). This software would only be useful as long as the embedded
+license expiration allowed. Sure you could change the clock on the computer,
+but there are many other ways that this type of intentional restriction is in
conflict with that ideal.
However, what about unintentional restrictions? What if a software had an equal
@@ -90,7 +83,8 @@ or similar restriction, and stopped working after days pass? Or what if the
programmer added a constant to make the development simpler, and this led to
unintentionally restricting the user?
-```bash
+[source,bash]
+----
# today
$ useful-program
# ...useful output...
@@ -98,81 +92,86 @@ $ useful-program
# tomorrow, with more data
$ useful-program
ERROR: Panic! Stack overflow!
-```
+----
Just as easily as I can come up with ways to intentionally restrict users, I can
-do the same for unintentionally restrictions. A program can stop working for a
+do the same for unintentional restrictions. A program can stop working for a
variety of reasons.
-If it stops working due do, say, data growth, what are the options? Reverting to
-an earlier backup, and making it read-only? That isn't really a "Long Now", but
-rather a "Long Now as long as the software keeps working as expected".
+If it stops working due to, say, data growth, what are the options? Reverting
+to an earlier backup, and making it read-only? That isn't really a "Long Now",
+but rather a "Long Now as long as the software keeps working as expected".
The point is: if the software isn't free, "The Long Now" isn't achievable
-without a lot of wishful thinking. Maybe the authors were trying to be more
-friendly towards business who don't like free software, but in doing so they've proposed
-a contradiction by reconciling "The Long Now" with proprietary software.
+without a lot of wishful thinking. Maybe the authors were trying to be more
+friendly towards businesses that don't like free software, but in doing so
+they've
+proposed a contradiction by reconciling "The Long Now" with proprietary
+software.
-It isn't the same as saying that any free software achieves that ideal,
-either. The license can still be free, but the source code can become
-unavailable due to cloud rot. Or maybe the build is undocumented, or the build
-tools had specific configuration that one has to guess. A piece of free
-software can still fail to achieve "The Long Now". Being free doesn't guarantee
-it, just makes it possible.
+It isn't the same as saying that any free software achieves that ideal, either.
+The license can still be free, but the source code can become unavailable due to
+cloud rot. Or maybe the build is undocumented, or the build tools had specific
+configuration that one has to guess. A piece of free software can still fail to
+achieve "The Long Now". Being free doesn't guarantee it, just makes it
+possible.
A colleague has challenged my view, arguing that the software doesn't really
-need to be free, as long as there is an specification of the file format. This
+need to be free, as long as there is a specification of the file format. This
way if the software stops working, the format can still be processed by other
-programs. But this doesn't apply in practice: if you have a document that you
+programs. But this doesn't apply in practice: if you have a document that you
write to, and software stops working, you still want to write to the document.
An external tool that navigates the content and shows it to you won't allow you
to keep writing, and when it does, that tool is now starting to re-implement the
software.
An open specification could serve as a blueprint to other implementations,
-making the data format more friendly to reverse-engineering. But the
-re-implementation still has to exist, at which point the original software failed
-to achieve "The Long Now".
+making the data format more friendly to reverse-engineering. But the
+re-implementation still has to exist, at which point the original software
+failed to achieve "The Long Now".
It is less bad, but still not quite there yet.
== Denial of existing solutions
-When describing "Existing Data Storage and Sharing Models", on a
-footnote[^devil] the authors say:
+:distgit: https://drewdevault.com/2018/07/23/Git-is-already-distributed.html
-[^devil]: This is the second aspect that I'm picking on the article from a
- footnote. I guess the devil really is on the details.
+When describing "Existing Data Storage and Sharing Models", on a
+footnote{empty}footnote:devil[
+ This is the second aspect that I'm picking on the article from a footnote. I
+ guess the devil really is on the details.
+] the authors say:
-> In principle it is possible to collaborate without a repository service,
-> e.g. by sending patch files by email, but the majority of Git users rely
-> on GitHub.
+____
+In principle it is possible to collaborate without a repository service, e.g. by
+sending patch files by email, but the majority of Git users rely on GitHub.
+____
The authors go to a great length to talk about usability of cloud apps, and even
point to research they've done on it, but they've missed learning more from
local-first solutions that already exist.
Say the automerge CRDT proves to be even more useful than what everybody
-imagined. Say someone builds a local-first repository service using it. How will
-it change anything of the Git/GitHub model? What is different about it that
+imagined. Say someone builds a local-first repository service using it. How
+will it change anything about the Git/GitHub model? What is different about it
that prevents people in the future from writing a paper saying:
-> In principle it is possible to collaborate without a repository service,
-> e.g. by using automerge and platform X,
-> but the majority of Git users rely on GitHub.
+____
+In principle it is possible to collaborate without a repository service, e.g. by
+using automerge and platform X, but the majority of Git users rely on GitHub.
+____
How is this any better?
-If it is already [possible][git-local-first] to have a local-first development
-workflow, why don't people use it? Is it just fashion, or there's a fundamental
-problem with it? If so, what is it, and how to avoid it?
+If it is already {distgit}[possible] to have a local-first development workflow,
+why don't people use it? Is it just fashion, or is there a fundamental problem
+with it? If so, what is it, and how to avoid it?
If sending patches by emails is perfectly possible but out of fashion, why even
-talk about Git/GitHub? Isn't this a problem that people are putting themselves
-in? How can CRDTs possibly prevent people from doing that?
+talk about Git/GitHub? Isn't this a problem that people are putting themselves
+in? How can CRDTs possibly prevent people from doing that?
My impression is that the authors envision a better future, where development is
-fully decentralized unlike today, and somehow CRDTs will make that happen. If
+fully decentralized unlike today, and somehow CRDTs will make that happen. If
more people think this way, "CRDT" is next in line for the buzzword list that
solves everything, like "containers", "blockchain" or "machine learning".
@@ -180,56 +179,56 @@ Rather than picturing an imaginary service that could be described like
"GitHub+CRDTs" and people would adopt it, I'd rather better understand why
people don't do it already, since Git is built to work like that.
-[git-local-first]: https://drewdevault.com/2018/07/23/Git-is-already-distributed.html
-
== Ditching of web applications
+:pouchdb: https://pouchdb.com/
+:instant-apps: https://developer.android.com/topic/google-play-instant
+
The authors put web applications in a worse position for building local-first
applications, claiming that:
-> (...) the architecture of web apps remains fundamentally server-centric.
-> Offline support is an afterthought in most web apps, and the result is
-> accordingly fragile.
+____
+(...) the architecture of web apps remains fundamentally server-centric.
+Offline support is an afterthought in most web apps, and the result is
+accordingly fragile.
+____
Well, I disagree.
The problem isn't inherent to the web platform, but instead how people use it.
-I have myself built offline-first applications, leveraging IndexedDB, App Cache,
-*etc*. I wanted to build an offline-first application on the web, and so I did.
+I have myself built offline-first applications, leveraging IndexedDB, App
+Cache, _etc_. I wanted to build an offline-first application on the web, and
+so I did.
-In fact, many people choose [PouchDB][pouchdb] *because* of that, since it is a
-good tool for offline-first web applications. The problem isn't really the
+In fact, many people choose {pouchdb}[PouchDB] _because_ of that, since it is a
+good tool for offline-first web applications. The problem isn't really the
technology, but how much people want their application to be local-first.
-Contrast it with Android [Instant Apps][instant-apps], where applications are
-sent to the phone in small parts. Since this requires an internet connection to
+Contrast it with Android {instant-apps}[Instant Apps], where applications are
+sent to the phone in small parts. Since this requires an internet connection to
move from a part of the app bundle to another, a subset of the app isn't
local-first, despite being an app.
-The point isn't the technology, but how people are using it. Local-first web
+The point isn't the technology, but how people are using it. Local-first web
applications are perfectly possible, just like non-local-first native
applications are possible.
-[pouchdb]: https://pouchdb.com/
-[instant-apps]: https://developer.android.com/topic/google-play-instant
-
== Costs are underrated
I think the costs of "old-fashioned apps" over "cloud apps" are underrated,
mainly regarding storage, and that these costs can vary a lot by application.
Say a person writes online articles for their personal website, and puts
-everything into Git. Since there isn't supposed to be any collaboration, all
-of the relevant ideals of local-first are achieved.
+everything into Git. Since there isn't supposed to be any collaboration, all of
+the relevant ideals of local-first are achieved.
-Now another person creates videos instead of articles. They could try keeping
+Now another person creates videos instead of articles. They could try keeping
everything local, but after some time the storage usage fills the entire disk.
This person's local-first setup would be much more complex, and would cost much
more on maintenance, backup and storage.
Even though both have similar needs, a local-first video repository is much more
-demanding. So the local-first thinking here isn't "just keep everything local",
+demanding. So the local-first thinking here isn't "just keep everything local",
but "how much time and money am I willing to spend to keep everything local".
The convenience of "cloud apps" becomes so attractive that many don't even have
@@ -245,22 +244,22 @@ If I were the one making the list of ideals, I wouldn't focus so much on
real-time collaboration.
Even though seamless collaboration is desired, it being real-time depends on the
-network being available for that. But ideal 3 states that
-"The Network is Optional", so real-time collaboration is also optional.
+network being available for that. But ideal 3 states that "The Network is
+Optional", so real-time collaboration is also optional.
The fundamentals of a local-first system should enable real-time collaboration
when network is available, but shouldn't focus on it.
In many places when discussing applications being offline, it is common for me
-to find people saying that their application works
-"even on a plane, subway or elevator". That is a reflection of when said
-developers have to deal with networks being unavailable.
+to find people saying that their application works "even on a plane, subway or
+elevator". That is a reflection of when said developers have to deal with
+networks being unavailable.
But this leaves out a big chunk of the world where internet connection is
intermittent, or only works every other day or only once a week, or stops
-working when it rains, *etc*. For this audience, living without network
-connectivity isn't such a discrete moment in time, but part of every day life. I
-like the fact that the authors acknowledge that.
+working when it rains, _etc_. For this audience, living without network
+connectivity isn't such a discrete moment in time, but part of everyday life.
+I like the fact that the authors acknowledge that.
When discussing "working offline", I'd rather keep this type of person in mind,
then the subset of people who are offline when on the elevator will naturally be
@@ -268,31 +267,32 @@ included.
== On CRDTs and developer experience
+:archived-article: https://web.archive.org/web/20130116163535/https://labs.oracle.com/techrep/1994/smli_tr-94-29.pdf
+
When discussing developer experience, the authors bring up some questions to be
answered further, like:
-> For an app developer, how does the use of a CRDT-based data layer compare to
-> existing storage layers like a SQL database, a filesystem, or CoreData? Is a
-> distributed system harder to write software for?
+____
+For an app developer, how does the use of a CRDT-based data layer compare to
+existing storage layers like a SQL database, a filesystem, or CoreData? Is a
+distributed system harder to write software for?
+____
That is an easy one: yes.
-A distributed system *is* harder to write software for, being a distributed
+A distributed system _is_ harder to write software for, being a distributed
system.
Adding a large layer of data structures and algorithms will make it more complex
-to write software for, naturally. And if trying to make this layer transparent
+to write software for, naturally. And trying to make this layer transparent
to the programmer, so they can pretend that layer doesn't exist, is a bad idea,
as RPC frameworks have tried and failed to do.
-See "[A Note on Distributed Computing][note-dist-comp]" for a critique on RPC
+See "{archived-article}[A Note on Distributed Computing]" for a critique of RPC
frameworks trying to make the network invisible, which I think applies equally
to making the CRDT layer invisible.
-[rmi-wiki]: https://en.wikipedia.org/wiki/Java_remote_method_invocation
-[note-dist-comp]: https://web.archive.org/web/20130116163535/http://labs.oracle.com/techrep/1994/smli_tr-94-29.pdf
-
-## Conclusion
+== Conclusion
I liked a lot the article, as it took the "offline-first" philosophy and ran
with it.
diff --git a/src/content/blog/2021/01/26/remembering-ann.adoc b/src/content/blog/2021/01/26/remembering-ann.adoc
index 0d02384..5b7d2b0 100644
--- a/src/content/blog/2021/01/26/remembering-ann.adoc
+++ b/src/content/blog/2021/01/26/remembering-ann.adoc
@@ -1,55 +1,60 @@
----
+= ANN: remembering - Add memory to dmenu, fzf and similar tools
-title: "ANN: remembering - Add memory to dmenu, fzf and similar tools"
+:remembering: https://euandreh.xyz/remembering/
+:dmenu: https://tools.suckless.org/dmenu/
+:fzf: https://github.com/junegunn/fzf
-date: 2021-01-26
+Today I pushed v0.1.0 of {remembering}[remembering], a tool to enhance the
+interactive usability of menu-like tools, such as {dmenu}[dmenu] and {fzf}[fzf].
-layout: post
+== Previous solution
-lang: en
+:yeganesh: https://dmwit.com/yeganesh/
-ref: ann-remembering-add-memory-to-dmenu-fzf-and-similar-tools
+I previously used {yeganesh}[yeganesh] to fill this gap, but as I started to
+rely less on Emacs, I adopted fzf as my go-to tool for doing fuzzy searching on
+the terminal. But I didn't like that fzf always showed the same order of
+things, when I would only need 3 or 4 commonly used files.
----
+For those who don't know: yeganesh is a wrapper around dmenu that will remember
+your most used programs and put them on the beginning of the list of
+executables. This is very convenient for interactive prolonged use, as with
+time the things you usually want are right at the very beginning.
-Today I pushed v0.1.0 of [remembering], a tool to enhance the interactive usability of menu-like tools, such as [dmenu] and [fzf].
+But now I had this thing, yeganesh, that solved this problem for dmenu, but
+didn't for fzf.
-## Previous solution
+I initially considered patching yeganesh to support it, but I found it more
+coupled to dmenu than I would desire. I'd rather have something that knows
+nothing about dmenu, fzf or anything, but enhances tools like those in a useful
+way.
-I previously used [yeganesh] to fill this gap, but as I started to rely less on Emacs, I added fzf as my go-to tool for doing fuzzy searching on the terminal.
-But I didn't like that fzf always showed the same order of things, when I would only need 3 or 4 commonly used files.
+== Implementation
-For those who don't know: yeganesh is a wrapper around dmenu that will remember your most used programs and put them on the beginning of the list of executables.
-This is very convenient for interactive prolonged use, as with time the things you usually want are right at the very beginning.
+:v-010: https://euandre.org/git/remembering/tree/remembering?id=v0.1.0
+:getopts: https://www.opengroup.org/onlinepubs/9699919799/utilities/getopts.html
+:sort: https://www.opengroup.org/onlinepubs/9699919799/utilities/sort.html
+:awk: https://www.opengroup.org/onlinepubs/9699919799/utilities/awk.html
+:spencer-quote: https://en.wikipedia.org/wiki/Henry_Spencer#cite_note-3
-But now I had this thing, yeganesh, that solved this problem for dmenu, but didn't for fzf.
+Other than being decoupled from dmenu, another improvement I thought could
+be made on top of yeganesh is the programming language choice. Instead of
+Haskell, I went with POSIX sh. Sticking to POSIX sh makes it require fewer
+build-time dependencies. There aren't any, actually. Packaging is made much
+easier due to that.
-I initially considered patching yeganesh to support it, but I found it more coupled to dmenu than I would desire.
-I'd rather have something that knows nothing about dmenu, fzf or anything, but enhances tools like those in a useful way.
+The good thing is that the program itself is small enough ({v-010}[119 lines] on
+v0.1.0) that POSIX sh does the job just fine, combined with other POSIX
+utilities such as {getopts}[getopts], {sort}[sort] and {awk}[awk].
-[remembering]: https://euandreh.xyz/remembering/
-[dmenu]: https://tools.suckless.org/dmenu/
-[fzf]: https://github.com/junegunn/fzf
-[yeganesh]: http://dmwit.com/yeganesh/
-
-## Implementation
-
-Other than being decoupled from dmenu, another improvement I though that could be made on top of yeganesh is the programming language choice.
-Instead of Haskell, I went with POSIX sh.
-Sticking to POSIX sh makes it require less build-time dependencies. There aren't any, actually. Packaging is made much easier due to that.
-
-The good thing is that the program itself is small enough ([119 lines] on v0.1.0) that POSIX sh does the job just fine, combined with other POSIX utilities such as [getopts], [sort] and [awk].
-
-[119 lines]: https://euandre.org/git/remembering/tree/remembering?id=v0.1.0
-[getopts]: http://www.opengroup.org/onlinepubs/9699919799/utilities/getopts.html
-[sort]: http://www.opengroup.org/onlinepubs/9699919799/utilities/sort.html
-[awk]: http://www.opengroup.org/onlinepubs/9699919799/utilities/awk.html
-
-The behaviour is: given a program that will read from STDIN and write a single entry to STDOUT, `remembering` wraps that program, and rearranges STDIN so that previous choices appear at the beginning.
+The behaviour is: given a program that will read from STDIN and write a single
+entry to STDOUT, `remembering` wraps that program, and rearranges STDIN so that
+previous choices appear at the beginning.
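The rearranging step described above can be sketched in a few lines of POSIX
sh. This is a hypothetical illustration, not the actual `remembering` source;
`remember_order` and the "count choice" profile format are invented for the
sketch:

```shell
# remember_order PROFILE: read lines from STDIN and print the ones recorded in
# PROFILE (lines of "count choice") first, most frequent first, then the rest
# in their original order. A sketch of the idea, not the real implementation.
remember_order() {
	awk -v p="$1" '
		BEGIN {
			# load "count choice" pairs from the profile file
			while ((getline l < p) > 0) {
				i = index(l, " ")
				c[substr(l, i + 1)] = substr(l, 1, i - 1)
			}
		}
		# prefix each line with an inverted count and its position,
		# so a single lexicographic sort yields the desired order
		{ printf "%09d %09d %s\n", 999999999 - c[$0], NR, $0 }
	' | sort | cut -d' ' -f3-
}
```

Unseen entries keep a count of zero, so they sort after remembered ones while
preserving their input order.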
Where you would do:
-```shell
+[source,shell]
+----
$ seq 5 | fzf
5
@@ -59,11 +64,12 @@ $ seq 5 | fzf
> 1
5/5
>
-```
+----
And every time get the same order of numbers, now you can write:
-```shell
+[source,shell]
+----
$ seq 5 | remembering -p seq-fzf -c fzf
5
@@ -73,11 +79,13 @@ $ seq 5 | remembering -p seq-fzf -c fzf
> 1
5/5
>
-```
+----
-On the first run, everything is the same. If you picked 4 on the previous example, the following run would be different:
+On the first run, everything is the same. If you picked 4 on the previous
+example, the following run would be different:
-```shell
+[source,shell]
+----
$ seq 5 | remembering -p seq-fzf -c fzf
5
@@ -87,31 +95,36 @@ $ seq 5 | remembering -p seq-fzf -c fzf
> 4
5/5
>
-```
+----
As time passes, the list would adjust based on the frequency of your choices.
-I aimed for reusability, so that I could wrap diverse commands with `remembering` and it would be able to work. To accomplish that, a "profile" (the `-p something` part) stores data about different runs separately.
-
-I took the idea of building something small with few dependencies to other places too:
-- the manpages are written in troff directly;
-- the tests are just more POSIX sh files;
-- and a POSIX Makefile to `check` and `install`.
+I aimed for reusability, so that I could wrap diverse commands with
+`remembering` and it would be able to work. To accomplish that, a "profile"
+(the `-p something` part) stores data about different runs separately.
-I was aware of the value of sticking to coding to standards, but I had past experience mostly with programming language standards, such as ECMAScript, Common Lisp, Scheme, or with IndexedDB or DOM APIs.
-It felt good to rediscover these nice POSIX tools, which makes me remember of a quote by [Henry Spencer][poor-unix]:
+I took the idea of building something small with few dependencies to other
+places too:
+
+* the manpages are written in troff directly;
+* the tests are just more POSIX sh files;
+* and a POSIX Makefile to `check` and `install`.
-> Those who do not understand Unix are condemned to reinvent it, poorly.
+I was aware of the value of sticking to coding standards, but I had past
+experience mostly with programming language standards, such as ECMAScript,
+Common Lisp, Scheme, or with IndexedDB or DOM APIs. It felt good to rediscover
+these nice POSIX tools, which reminds me of a quote by
+{spencer-quote}[Henry Spencer]:
-[poor-unix]: https://en.wikipedia.org/wiki/Henry_Spencer#cite_note-3
+____
+Those who do not understand Unix are condemned to reinvent it, poorly.
+____
-## Usage examples
+== Usage examples
Here are some functions I wrote myself that you may find useful:
-### Run a command with fzf on `$PWD`
+=== Run a command with fzf on `$PWD`
-```shellcheck
+[source,shell]
+----
f() {
profile="f-shell-function-$(pwd | sed -e 's_/_-_g')"
file="$(git ls-files | \
@@ -124,14 +137,18 @@ f() {
"$1" "$file"
fi
}
-```
+----
-This way I can run `f vi` or `f vi config` at the root of a repository, and the list of files will always appear on the most used order.
-Adding `pwd` to the profile allows it to not mix data for different repositories.
+This way I can run `f vi` or `f vi config` at the root of a repository, and the
+list of files will always appear in the most used order. Adding `pwd` to the
+profile allows it to not mix data for different repositories.
-### Copy password to clipboard
+=== Copy password to clipboard
-```shell
+:pass: https://www.passwordstore.org/
+
+[source,shell]
+----
choice="$(find "$HOME/.password-store" -type f | \
grep -Ev '(.git|.gpg-id)' | \
sed -e "s|$HOME/.password-store/||" -e 's/\.gpg$//' | \
@@ -142,49 +159,57 @@ choice="$(find "$HOME/.password-store" -type f | \
if [ -n "$choice" ]; then
pass show "$choice" -c
fi
-```
-
-Adding the above to a file and binding it to a keyboard shortcut, I can access the contents of my [password store][password-store], with the entries ordered by usage.
+----
-[password-store]: https://www.passwordstore.org/
+Adding the above to a file and binding it to a keyboard shortcut, I can access
+the contents of my {pass}[password store], with the entries ordered by usage.
-### Replacing yeganesh
+=== Replacing yeganesh
Where I previously had:
-```shell
+[source,shell]
+----
exe=$(yeganesh -x) && exec $exe
-```
+----
Now I have:
-```shell
+[source,shell]
+----
exe=$(dmenu_path | remembering -p dmenu-exec -c dmenu) && exec $exe
-```
+----
This way, the executables appear on order of usage.
-If you don't have `dmenu_path`, you can get just the underlying `stest` tool that looks at the executables available in your `$PATH`. Here's a juicy one-liner to do it:
+If you don't have `dmenu_path`, you can get just the underlying `stest` tool
+that looks at the executables available in your `$PATH`. Here's a juicy
+one-liner to do it:
-```shell
+[source,shell]
+----
$ wget -O- https://dl.suckless.org/tools/dmenu-5.0.tar.gz | \
tar Ozxf - dmenu-5.0/arg.h dmenu-5.0/stest.c | \
sed 's|^#include "arg.h"$|// #include "arg.h"|' | \
cc -xc - -o stest
-```
+----
+
+With the `stest` utility you'll be able to list executables in your `$PATH` and
+pipe them to dmenu or something else yourself:
-With the `stest` utility you'll be able to list executables in your `$PATH` and pipe them to dmenu or something else yourself:
-```shell
+[source,shell]
+----
$ (IFS=:; ./stest -flx $PATH;) | sort -u | remembering -p another-dmenu-exec -c dmenu | sh
-```
+----
In fact, the code for `dmenu_path` is almost just like that.
-## Conclusion
+== Conclusion
-For my personal use, I've [packaged] `remembering` for GNU Guix and Nix. Packaging it to any other distribution should be trivial, or just downloading the tarball and running `[sudo] make install`.
+:packaged: https://euandre.org/git/package-repository/
-Patches welcome!
+For my personal use, I've {packaged}[packaged] `remembering` for GNU Guix and
+Nix. Packaging it to any other distribution should be trivial, or just
+downloading the tarball and running `[sudo] make install`.
-[packaged]: https://euandre.org/git/package-repository/
-[nix-file]: https://euandre.org/git/dotfiles/tree/nixos/not-on-nixpkgs/remembering.nix?id=0831444f745cf908e940407c3e00a61f6152961f
+Patches welcome!
diff --git a/src/content/blog/2021/02/17/fallible.adoc b/src/content/blog/2021/02/17/fallible.adoc
index 8a097f8..533e107 100644
--- a/src/content/blog/2021/02/17/fallible.adoc
+++ b/src/content/blog/2021/02/17/fallible.adoc
@@ -1,49 +1,51 @@
= ANN: fallible - Fault injection library for stress-testing failure scenarios
-date: 2021-02-17
+:fallible: https://euandreh.xyz/fallible/
-updated_at: 2022-03-06
+Yesterday I pushed v0.1.0 of {fallible}[fallible], a minuscule library for
+fault-injection and stress-testing C programs.
-layout: post
+== _EDIT_
-lang: en
+:changelog: https://euandreh.xyz/fallible/CHANGELOG.html
+:tarball: https://euandre.org/static/attachments/fallible.tar.gz
-ref: ann-fallible-fault-injection-library-for-stress-testing-failure-scenarios
+2021-06-12: As of {changelog}[0.3.0] (and beyond), the macro interface improved
+and is a bit different from what is presented in this article. If you're
+interested, I encourage you to take a look at it.
----
+2022-03-06: I've {tarball}[archived] the project for now. It still needs some
+maturing before being usable.
-Yesterday I pushed v0.1.0 of [fallible], a miniscule library for fault-injection
-and stress-testing C programs.
+== Existing solutions
-[fallible]: https://euandreh.xyz/fallible/
+:gnu-std: https://www.gnu.org/prep/standards/standards.html#Semantics
+:valgrind: https://www.valgrind.org/
+:so-alloc: https://stackoverflow.com/questions/1711170/unit-testing-for-failed-malloc
-## *EDIT*
+Writing robust code can be challenging, and tools like static analyzers, fuzzers
+and friends can help you get there with more certainty. As I would try to
+improve some of my C code and make it more robust, in order to handle system
+crashes, filled disks, out-of-memory and similar scenarios, I didn't find
+the tooling I expected to find: tools to help me explicitly stress-test those
+failure scenarios.
-2021-06-12: As of [0.3.0] (and beyond), the macro interface improved and is a bit different from what is presented in this article. If you're interested, I encourage you to take a look at it.
+Take the "{gnu-std}[Writing Robust Programs]" section of the GNU Coding
+Standards:
-2022-03-06: I've [archived] the project for now. It still needs some maturing before being usable.
+____
+Check every system call for an error return, unless you know you wish to ignore
+errors. (...) Check every call to malloc or realloc to see if it returned NULL.
+____
-[0.3.0]: https://euandreh.xyz/fallible/CHANGELOG.html
-[archived]: https://euandre.org/static/attachments/fallible.tar.gz
-
-## Existing solutions
-
-Writing robust code can be challenging, and tools like static analyzers, fuzzers and friends can help you get there with more certainty.
-As I would try to improve some of my C code and make it more robust, in order to handle system crashes, filled disks, out-of-memory and similar scenarios, I didn't find existing tooling to help me get there as I expected to find.
-I couldn't find existing tools to help me explicitly stress-test those failure scenarios.
-
-Take the "[Writing Robust Programs][gnu-std]" section of the GNU Coding Standards:
-
-[gnu-std]: https://www.gnu.org/prep/standards/standards.html#Semantics
-
-> Check every system call for an error return, unless you know you wish to ignore errors.
-> (...) Check every call to malloc or realloc to see if it returned NULL.
-
-From a robustness standpoint, this is a reasonable stance: if you want to have a robust program that knows how to fail when you're out of memory and `malloc` returns `NULL`, than you ought to check every call to `malloc`.
+From a robustness standpoint, this is a reasonable stance: if you want to have a
+robust program that knows how to fail when you're out of memory and `malloc`
+returns `NULL`, then you ought to check every call to `malloc`.
Take a sample code snippet for clarity:
-```c
+[source,c]
+----
void a_function() {
char *s1 = malloc(A_NUMBER);
strcpy(s1, "some string");
@@ -51,13 +53,15 @@ void a_function() {
char *s2 = malloc(A_NUMBER);
strcpy(s2, "another string");
}
-```
+----
-At a first glance, this code is unsafe: if any of the calls to `malloc` returns `NULL`, `strcpy` will be given a `NULL` pointer.
+At a first glance, this code is unsafe: if any of the calls to `malloc` returns
+`NULL`, `strcpy` will be given a `NULL` pointer.
My first instinct was to change this code to something like this:
-```diff
+[source,diff]
+----
@@ -1,7 +1,15 @@
void a_function() {
char *s1 = malloc(A_NUMBER);
@@ -74,22 +78,26 @@ My first instinct was to change this code to something like this:
+ }
strcpy(s2, "another string");
}
-```
+----
As I later found out, there are at least 2 problems with this approach:
-1. **it doesn't compose**: this could arguably work if `a_function` was `main`.
- But if `a_function` lives inside a library, an `exit(1);` is a inelegant way of handling failures, and will catch the top-level `main` consuming the library by surprise;
-2. **it gives up instead of handling failures**: the actual handling goes a bit beyond stopping.
- What about open file handles, in-memory caches, unflushed bytes, etc.?
-
-If you could force only the second call to `malloc` to fail, [Valgrind] would correctly complain that the program exitted with unfreed memory.
+. *it doesn't compose*: this could arguably work if `a_function` was `main`.
+ But if `a_function` lives inside a library, an `exit(1);` is an inelegant way
+ of handling failures, and will catch the top-level `main` consuming the
+ library by surprise;
+. *it gives up instead of handling failures*: the actual handling goes a bit
+ beyond stopping. What about open file handles, in-memory caches, unflushed
+ bytes, etc.?
-[Valgrind]: https://www.valgrind.org/
+If you could force only the second call to `malloc` to fail,
+{valgrind}[Valgrind] would correctly complain that the program exited with
+unfreed memory.
So the last change to make the best version of the above code is:
-```diff
+[source,diff]
+----
@@ -1,15 +1,14 @@
-void a_function() {
+bool a_function() {
@@ -110,40 +118,61 @@ So the last change to make the best version of the above code is:
}
strcpy(s2, "another string");
}
-```
+----
-Instead of returning `void`, `a_function` now returns `bool` to indicate whether an error ocurred during its execution.
-If `a_function` returned a pointer to something, the return value could be `NULL`, or an `int` that represents an error code.
+Instead of returning `void`, `a_function` now returns `bool` to indicate whether
+an error occurred during its execution. If `a_function` returned a pointer to
+something, the return value could be `NULL`, or an `int` that represents an
+error code.
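Put together, the fully patched function might look something like the sketch
below. This is my reconstruction of the resulting pattern, not the article's
exact listing; `A_NUMBER` is the placeholder size from the original snippet:

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define A_NUMBER 64

/* Sketch of the safe version: every malloc is checked, and on failure the
   function frees what it already allocated and reports the error to the
   caller instead of exiting. */
bool a_function(void) {
	char *s1 = malloc(A_NUMBER);
	if (!s1) {
		return false;
	}
	strcpy(s1, "some string");

	char *s2 = malloc(A_NUMBER);
	if (!s2) {
		free(s1); /* release partial work before bailing out */
		return false;
	}
	strcpy(s2, "another string");

	free(s1);
	free(s2);
	return true;
}
```

The caller then decides what a failure means: propagate it further up, clean up
its own resources, or abort.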
-The code is now a) safe and b) failing gracefully, returning the control to the caller to properly handle the error case.
+The code is now a) safe and b) failing gracefully, returning the control to the
+caller to properly handle the error case.
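The shape of this pattern isn't C-specific. Below is an illustrative Python sketch (not code from the article) of the same discipline: release everything acquired so far before reporting failure to the caller. The `alloc` and `free` callables are hypothetical stand-ins for `malloc` and `free`, injected so a failure can be forced:

```python
def a_function(alloc, free):
    # acquire the first resource; nothing to clean up yet on failure
    s1 = alloc(20)
    if s1 is None:
        return False
    # acquire the second resource; on failure, release s1 before giving up
    s2 = alloc(20)
    if s2 is None:
        free(s1)
        return False
    # ...do the actual work with s1 and s2...
    free(s2)
    free(s1)
    return True
```

A caller can force the second acquisition to fail and check that the first resource was released, which is exactly the property the fault-injection machinery described below automates.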
-After seeing similar patterns on well designed APIs, I adopted this practice for my own code, but was still left with manually verifying the correctness and robustness of it.
+After seeing similar patterns in well-designed APIs, I adopted this practice
+for my own code, but was still left with manually verifying its correctness and
+robustness.
-How could I add assertions around my code that would help me make sure the `free(s1);` exists, before getting an error report?
-How do other people and projects solve this?
+How could I add assertions around my code that would help me make sure the
+`free(s1);` exists before getting an error report? How do other people and
+projects solve this?
-From what I could see, either people a) hope for the best, b) write safe code but don't strees-test it or c) write ad-hoc code to stress it.
+From what I could see, either people a) hope for the best, b) write safe code
+but don't stress-test it, or c) write ad-hoc code to stress it.
-The most proeminent case of c) is SQLite: it has a few wrappers around the familiar `malloc` to do fault injection, check for memory limits, add warnings, create shim layers for other environments, etc.
-All of that, however, is tightly couple with SQLite itself, and couldn't be easily pulled off for using somewhere else.
+The most prominent case of c) is SQLite: it has a few wrappers around the
+familiar `malloc` to do fault injection, check for memory limits, add warnings,
+create shim layers for other environments, etc. All of that, however, is
+tightly coupled with SQLite itself, and couldn't easily be pulled out for use
+somewhere else.
-When searching for it online, an [interesting thread] caught my atention: fail the call to `malloc` for each time it is called, and when the same stacktrace appears again, allow it to proceed.
+When searching for it online, an {so-alloc}[interesting thread] caught my
+attention: fail a call to `malloc` the first time a given stacktrace reaches
+it, and when the same stacktrace appears again, allow it to proceed.
-[interesting thread]: https://stackoverflow.com/questions/1711170/unit-testing-for-failed-malloc
+== Implementation
-## Implementation
+:mallocfail: https://github.com/ralight/mallocfail
+:should-fail-fn: https://euandre.org/git/fallible/tree/src/fallible.c?id=v0.1.0#n16
-A working implementation of that already exists: [mallocfail].
-It uses `LD_PRELOAD` to replace `malloc` at run-time, computes the SHA of the stacktrace and fails once for each SHA.
+A working implementation of that already exists: {mallocfail}[mallocfail]. It
+uses `LD_PRELOAD` to replace `malloc` at run-time, computes the SHA of the
+stacktrace and fails once for each SHA.
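mallocfail's core idea, failing once per distinct stacktrace, can be sketched in a few lines. The Python below is an illustrative rendition of that bookkeeping (mallocfail itself is C loaded via `LD_PRELOAD`, not this code): hash the current stack, and fail only the first time each hash is seen:

```python
import hashlib
import traceback

_seen_stacks = set()

def should_fail():
    # format every frame leading up to the caller, dropping our own frame
    stack = "".join(traceback.format_stack()[:-1])
    digest = hashlib.sha256(stack.encode()).hexdigest()
    if digest in _seen_stacks:
        return False  # this stacktrace already failed once: proceed
    _seen_stacks.add(digest)
    return True  # first time this call path is seen: inject a failure
```

Each distinct call path fails exactly once; repeated calls from the same path succeed, so a retry loop eventually exercises every failure branch.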
-I initially envisioned and started implementing something very similar to mallocfail.
-However I wanted it to go beyond out-of-memory scenarios, and using `LD_PRELOAD` for every possible corner that could fail wasn't a good idea on the long run.
+I initially envisioned and started implementing something very similar to
+mallocfail. However, I wanted it to go beyond out-of-memory scenarios, and
+using `LD_PRELOAD` for every possible corner that could fail wasn't a good idea
+in the long run.
-Also, mallocfail won't work together with tools such as Valgrind, who want to do their own override of `malloc` with `LD_PRELOAD`.
+Also, mallocfail won't work together with tools such as Valgrind, which want to
+do their own override of `malloc` with `LD_PRELOAD`.
-I instead went with less automatic things: starting with a `fallible_should_fail(char *filename, int lineno)` function that fails once for each `filename`+`lineno` combination, I created macro wrappers around common functions such as `malloc`:
+I instead went with less automatic things: starting with a
+`fallible_should_fail(char *filename, int lineno)` function that fails once for
+each `filename`+`lineno` combination, I created macro wrappers around common
+functions such as `malloc`:
-```c
+[source,c]
+----
void *fallible_malloc(size_t size, const char *const filename, int lineno) {
#ifdef FALLIBLE
if (fallible_should_fail(filename, lineno)) {
@@ -157,11 +186,13 @@ void *fallible_malloc(size_t size, const char *const filename, int lineno) {
}
#define MALLOC(size) fallible_malloc(size, __FILE__, __LINE__)
-```
+----
-With this definition, I could replace the calls to `malloc` with `MALLOC` (or any other name that you want to `#define`):
+With this definition, I could replace the calls to `malloc` with `MALLOC` (or
+any other name that you want to `#define`):
-```diff
+[source,diff]
+----
--- 3.c 2021-02-17 00:15:38.019706074 -0300
+++ 4.c 2021-02-17 00:44:32.306885590 -0300
@@ -1,11 +1,11 @@
@@ -178,27 +209,35 @@ With this definition, I could replace the calls to `malloc` with `MALLOC` (or an
if (!s2) {
free(s1);
return false;
-```
+----
-With this change, if the program gets compiled with the `-DFALLIBLE` flag the fault-injection mechanism will run, and `MALLOC` will fail once for each `filename`+`lineno` combination.
-When the flag is missing, `MALLOC` is a very thin wrapper around `malloc`, which compilers could remove entirely, and the `-lfallible` flags can be omitted.
+With this change, if the program gets compiled with the `-DFALLIBLE` flag, the
+fault-injection mechanism will run, and `MALLOC` will fail once for each
+`filename`+`lineno` combination. When the flag is missing, `MALLOC` is a very
+thin wrapper around `malloc`, which compilers could remove entirely, and the
+`-lfallible` flags can be omitted.
-This applies not only to `malloc` or other `stdlib.h` functions.
-If `a_function` is important or relevant, I could add a wrapper around it too, that checks if `fallible_should_fail` to exercise if its callers are also doing the proper clean-up.
+This applies not only to `malloc` or other `stdlib.h` functions. If
+`a_function` is important or relevant, I could add a wrapper around it too,
+that checks `fallible_should_fail` to exercise whether its callers are also
+doing the proper clean-up.
-The actual code is just this single function, [`fallible_should_fail`], which ended-up taking only ~40 lines.
-In fact, there are more lines of either Makefile (111), README.md (82) or troff (306) on this first version.
+The actual code is just this single function,
+{should-fail-fn}[`fallible_should_fail`], which ended up taking only ~40 lines.
+In fact, there are more lines of either Makefile (111), README.md (82) or troff
+(306) on this first version.
-The price for such fine-grained control is that this approach requires more manual work.
+The price for such fine-grained control is that this approach requires more
+manual work.
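The once-per-call-site behaviour of `fallible_should_fail` can be sketched outside of C, too. The Python below is an illustrative rendition, not the real ~40-line C implementation linked above: fail the first time a `filename`+`lineno` pair is seen, and let every later call from that site proceed:

```python
_failed_sites = set()

def fallible_should_fail(filename, lineno):
    # fail exactly once per distinct filename+lineno call site
    site = (filename, lineno)
    if site in _failed_sites:
        return False  # this site already had its injected failure
    _failed_sites.add(site)
    return True
```

Because the key is the textual call site rather than a stacktrace hash, the behaviour is deterministic and independent of who calls the wrapper, at the price of annotating each call site manually.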
-[mallocfail]: https://github.com/ralight/mallocfail
-[`fallible_should_fail`]: https://euandre.org/git/fallible/tree/src/fallible.c?id=v0.1.0#n16
+== Usage examples
-## Usage examples
+=== `MALLOC` from the `README.md`
-### `MALLOC` from the `README.md`
+:fallible-check: https://euandreh.xyz/fallible/fallible-check.1.html
-```c
+[source,c]
+----
// leaky.c
#include <string.h>
#include <fallible_alloc.h>
@@ -221,24 +260,25 @@ int main() {
free(aaa);
return 0;
}
-```
+----
-Compile with `-DFALLIBLE` and run [`fallible-check.1`][fallible-check]:
-```shell
+Compile with `-DFALLIBLE` and run {fallible-check}[`fallible-check.1`]:
+
+[source,shell]
+----
$ c99 -DFALLIBLE -o leaky leaky.c -lfallible
$ fallible-check ./leaky
Valgrind failed when we did not expect it to:
(...suppressed output...)
# exit status is 1
-```
+----
-[fallible-check]: https://euandreh.xyz/fallible/fallible-check.1.html
+== Conclusion
-## Conclusion
+:package: https://euandre.org/git/package-repository/
-For my personal use, I'll [package] them for GNU Guix and Nix.
-Packaging it to any other distribution should be trivial, or just downloading the tarball and running `[sudo] make install`.
+For my personal use, I'll {package}[package] it for GNU Guix and Nix.
+Packaging it for any other distribution should be trivial, or just downloading
+the tarball and running `[sudo] make install`.
Patches welcome!
-
-[package]: https://euandre.org/git/package-repository/
diff --git a/src/content/blog/2021/04/29/relational-review.adoc b/src/content/blog/2021/04/29/relational-review.adoc
index e15b478..cb552c3 100644
--- a/src/content/blog/2021/04/29/relational-review.adoc
+++ b/src/content/blog/2021/04/29/relational-review.adoc
@@ -1,62 +1,73 @@
----
+= A Relational Model of Data for Large Shared Data Banks - article-review
-title: A Relational Model of Data for Large Shared Data Banks - article-review
+:empty:
+:reviewed-article: https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf
-date: 2021-04-29
+This is a review of the article "{reviewed-article}[A Relational Model of Data
+for Large Shared Data Banks]", by E. F. Codd.
-layout: post
+== Data Independence
-lang: en
+Codd brings the idea of _data independence_ as a better approach to use on
+databases. This is in contrast with the existing approaches, namely
+hierarchical (tree-based) and network-based.
-ref: a-relational-model-of-data-for-large-shared-data-banks-article-review
+His main argument is that queries in applications shouldn't depend on and be
+coupled with how the data is represented internally by the database system.
+This key idea is very powerful, and something that we strive for in many other
+places: decoupling the interface from the implementation.
----
+If the database system has this separation, it can keep the querying interface
+stable, while having the freedom to change its internal representation at will,
+for better performance, less storage, etc.
-This is a review of the article "[A Relational Model of Data for Large Shared Data Banks][codd-article]", by E. F. Codd.
+This is true for most modern database systems. They can change from B-Trees
+with leaves containing pointers to data, to B-Trees with leaves containing the
+raw data, to hash tables. All that without changing the query interface, only
+its performance.
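This decoupling can be illustrated with a toy example (mine, not the article's): two storage engines with different internal representations answering the same query interface, so callers never notice which one is in use:

```python
class ListEngine:
    """Linear scan over a list of rows."""
    def __init__(self, rows):
        self._rows = list(rows)

    def find_by_name(self, name):
        return [r for r in self._rows if r["name"] == name]

class IndexedEngine:
    """Hash index from name to rows; same interface, different internals."""
    def __init__(self, rows):
        self._index = {}
        for r in rows:
            self._index.setdefault(r["name"], []).append(r)

    def find_by_name(self, name):
        return self._index.get(name, [])
```

Swapping one engine for the other changes performance characteristics, never query results, which is the freedom data independence buys.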
-[codd-article]: https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf
+Codd mentions that, from an information representation standpoint, any index is
+a duplication, but useful for performance.
-## Data Independence
+This data independence also impacts ordering (a _relation_ doesn't rely on the
+insertion order).
-Codd brings the idea of *data independence* as a better approach to use on databases.
-This is contrast with the existing approaches, namely hierarquical (tree-based) and network-based.
+== Duplicates
-His main argument is that queries in applications shouldn't depende and be coupled with how the data is represented internally by the database system.
-This key idea is very powerful, and something that we strive for in many other places: decoupling the interface from the implementation.
+His definition of relational data is a bit different from most modern database
+systems, namely *no duplicate rows*.
-If the database system has this separation, it can kep the querying interface stable, while having the freedom to change its internal representation at will, for better performance, less storage, etc.
+I couldn't find a reason behind this restriction, though. For practical
+purposes, I find it useful to have it.
-This is true for most modern database systems.
-They can change from B-Trees with leafs containing pointers to data, to B-Trees with leafs containing the raw data , to hash tables.
-All that without changing the query interface, only its performance.
+== Relational Data
-Codd mentions that, from an information representation standpoint, any index is a duplication, but useful for perfomance.
+:edn: https://github.com/edn-format/edn
-This data independence also impacts ordering (a *relation* doesn't rely on the insertion order).
+In the article, Codd doesn't try to define a language, and today's most popular
+one is SQL.
-## Duplicates
+However, there is no restriction that says that "SQL database" and "relational
+database" are synonyms. One could have a relational database without using SQL
+at all, and it would still be a relational one.
-His definition of relational data is a bit differente from most modern database systems, namely **no duplicate rows**.
+The main one that I have in mind, and the reason that led me to reading this
+paper in the first place, is Datomic.
-I couldn't find a reason behind this restriction, though.
-For practical purposes, I find it useful to have it.
+It uses an {edn}[edn]-based representation for datalog
+queries{empty}footnote:edn-queries[
+ You can think of it as JSON, but with a Clojure taste.
+], and a particular schema used to represent data.
-## Relational Data
+Even though it looks very weird when coming from SQL, I'd argue that it ticks
+all the boxes (except for "no duplicates") that define a relational database,
+since building relations and applying operations on them is possible.
-In the article, Codd doesn't try to define a language, and today's most popular one is SQL.
+Compare and contrast a contrived example of possible SQL and datalog
+representations of the same data:
-However, there is no restriction that says that "SQL database" and "relational database" are synonyms.
-One could have a relational database without using SQL at all, and it would still be a relational one.
-
-The main one that I have in mind, and the reason that led me to reading this paper in the first place, is Datomic.
-
-Is uses an [edn]-based representation for datalog queries[^edn-queries], and a particular schema used to represent data.
-
-Even though it looks very weird when coming from SQL, I'd argue that it ticks all the boxes (except for "no duplicates") that defines a relational database, since building relations and applying operations on them is possible.
-
-Compare and contrast a contrived example of possible representations of SQL and datalog of the same data:
-
-```sql
+[source,sql]
+----
-- create schema
CREATE TABLE people (
id UUID PRIMARY KEY,
@@ -76,12 +87,11 @@ SELECT employees.name AS 'employee-name',
managers.name AS 'manager-name'
FROM people employees
INNER JOIN people managers ON employees.manager_id = managers.id;
-```
+----
-{% raw %}
-```
+----
;; create schema
-#{ {:db/ident :person/id
+#{{:db/ident :person/id
:db/valueType :db.type/uuid
:db/cardinality :db.cardinality/one
:db/unique :db.unique/value}
@@ -93,7 +103,7 @@ INNER JOIN people managers ON employees.manager_id = managers.id;
:db/cardinality :db.cardinality/one}}
;; insert data
-#{ {:person/id #uuid "d3f29960-ccf0-44e4-be66-1a1544677441"
+#{{:person/id #uuid "d3f29960-ccf0-44e4-be66-1a1544677441"
:person/name "Foo"
:person/manager [:person/id #uuid "076356f4-1a0e-451c-b9c6-a6f56feec941"]}
{:person/id #uuid "076356f4-1a0e-451c-b9c6-a6f56feec941"
@@ -104,27 +114,29 @@ INNER JOIN people managers ON employees.manager_id = managers.id;
:where [[?person :person/name ?employee-name]
[?person :person/manager ?manager]
[?manager :person/name ?manager-name]]}
-```
-{% endraw %}
+----
-(forgive any errors on the above SQL and datalog code, I didn't run them to check. Patches welcome!)
+(forgive any errors in the above SQL and datalog code, I didn't run them to
+check. Patches welcome!)
-This employee example comes from the paper, and both SQL and datalog representations match the paper definition of "relational".
+This employee example comes from the paper, and both SQL and datalog
+representations match the paper definition of "relational".
-Both "Foo" and "Bar" are employees, and the data is normalized.
-SQL represents data as tables, and Datomic as datoms, but relations could be derived from both, which we could view as:
+Both "Foo" and "Bar" are employees, and the data is normalized. SQL represents
+data as tables, and Datomic as datoms, but relations could be derived from both,
+which we could view as:
-```
+....
employee_name | manager_name
----------------------------
"Foo" | "Bar"
-```
-
-[^edn-queries]: You can think of it as JSON, but with a Clojure taste.
-[edn]: https://github.com/edn-format/edn
+....
-## Conclusion
+== Conclusion
+The article also talks about operators, consistency and normalization, which
+are now so widespread and well-known that it feels a bit weird seeing someone
+advocating for them.
-I also stablish that `relational != SQL`, and other databases such as Datomic are also relational, following Codd's original definition.
+I also establish that `relational != SQL`, and other databases such as Datomic
+are also relational, following Codd's original definition.
diff --git a/src/content/pastebins/2016/04/05/rpn.adoc b/src/content/pastebins/2016/04/05/rpn.adoc
index 25ca6ba..c567d0d 100644
--- a/src/content/pastebins/2016/04/05/rpn.adoc
+++ b/src/content/pastebins/2016/04/05/rpn.adoc
@@ -1,18 +1,7 @@
----
+= RPN macro setup
-title: RPN macro setup
-
-date: 2016-04-05
-
-layout: post
-
-lang: en
-
-ref: rpn-macro-setup
-
----
-
-```lisp
+[source,lisp]
+----
(defmacro rpn (body)
(rpn-expander body))
@@ -31,4 +20,4 @@ Just a quick stub.
One could easily improve #'RPN-EXPANDER in order to better suit one's needs.
|#
-```
+----
diff --git a/src/content/pastebins/2018/07/11/nix-pinning.adoc b/src/content/pastebins/2018/07/11/nix-pinning.adoc
index 2d35e09..d4bc905 100644
--- a/src/content/pastebins/2018/07/11/nix-pinning.adoc
+++ b/src/content/pastebins/2018/07/11/nix-pinning.adoc
@@ -1,20 +1,7 @@
----
+= Nix pinning
-title: Nix pinning
-
-date: 2018-07-11
-
-layout: post
-
-lang: en
-
-eu_categories: nix
-
-ref: nix-pinning
-
----
-
-```nix
+[source,nix]
+----
let
# Pin the nixpkgs version
stdenv = pkgs.stdenv;
@@ -35,4 +22,4 @@ in rec {
patches = [];
};
}
-```
+----
diff --git a/src/content/pastebins/2018/07/13/guix-nixos-systemd.adoc b/src/content/pastebins/2018/07/13/guix-nixos-systemd.adoc
index c2b8b62..729e981 100644
--- a/src/content/pastebins/2018/07/13/guix-nixos-systemd.adoc
+++ b/src/content/pastebins/2018/07/13/guix-nixos-systemd.adoc
@@ -1,20 +1,7 @@
----
+= GNU Guix systemd daemon for NixOS
-title: GNU Guix systemd daemon for NixOS
-
-date: 2018-07-13
-
-layout: post
-
-lang: en
-
-eu_categories: nix,guix
-
-ref: gnu-guix-systemd-daemon-for-nixos
-
----
-
-```nix
+[source,nix]
+----
# Derived from Guix guix-daemon.service.in
# https://git.savannah.gnu.org/cgit/guix.git/tree/etc/guix-daemon.service.in?id=00c86a888488b16ce30634d3a3a9d871ed6734a2
systemd.services.guix-daemon = {
@@ -30,4 +17,4 @@ ref: gnu-guix-systemd-daemon-for-nixos
};
wantedBy = [ "multi-user.target" ];
};
-```
+----
diff --git a/src/content/pastebins/2018/07/13/guixbuilder-nixos.adoc b/src/content/pastebins/2018/07/13/guixbuilder-nixos.adoc
index 880d347..b000cd3 100644
--- a/src/content/pastebins/2018/07/13/guixbuilder-nixos.adoc
+++ b/src/content/pastebins/2018/07/13/guixbuilder-nixos.adoc
@@ -1,20 +1,7 @@
----
+= Guix users in NixOS system configuration
-title: Guix users in NixOS system configuration
-
-date: 2018-07-13
-
-layout: post
-
-lang: en
-
-eu_categories: nix,guix
-
-ref: guix-users-in-nixos-system-configuration
-
----
-
-```nix
+[source,nix]
+----
users = {
mutableUsers = false;
@@ -50,4 +37,4 @@ ref: guix-users-in-nixos-system-configuration
name = "guixbuild";
};
};
-```
+----
diff --git a/src/content/pastebins/2018/07/13/guixbuilder.adoc b/src/content/pastebins/2018/07/13/guixbuilder.adoc
index 82204a8..84e84d9 100644
--- a/src/content/pastebins/2018/07/13/guixbuilder.adoc
+++ b/src/content/pastebins/2018/07/13/guixbuilder.adoc
@@ -1,20 +1,7 @@
----
+= Guix builder user creation commands
-title: Guix builder user creation commands
-
-date: 2018-07-13
-
-layout: post
-
-lang: en
-
-eu_categories: guix
-
-ref: guix-builder-user-creation-commands
-
----
-
-```shell
+[source,shell]
+----
groupadd --system guixbuild
for i in `seq -w 1 10`;
do
@@ -23,4 +10,4 @@ do
-c "Guix build user $i" --system \
guixbuilder$i;
done
-```
+----
diff --git a/src/content/pastebins/2018/07/13/nix-strpad.adoc b/src/content/pastebins/2018/07/13/nix-strpad.adoc
index 359bda5..0557b79 100644
--- a/src/content/pastebins/2018/07/13/nix-strpad.adoc
+++ b/src/content/pastebins/2018/07/13/nix-strpad.adoc
@@ -1,19 +1,6 @@
----
+= Nix string padding
-title: Nix string padding
-
-date: 2018-07-13
-
-layout: post
-
-lang: en
-
-eu_categories: nix
-
-ref: nix-string-padding
-
----
-
-```nix
+[source,nix]
+----
padString = (n: if n < 10 then "0" + toString n else toString n)
-```
+----
diff --git a/src/content/pastebins/2018/07/25/nix-exps.adoc b/src/content/pastebins/2018/07/25/nix-exps.adoc
index 23d75b6..824be69 100644
--- a/src/content/pastebins/2018/07/25/nix-exps.adoc
+++ b/src/content/pastebins/2018/07/25/nix-exps.adoc
@@ -1,20 +1,7 @@
----
+= Nix exps
-title: Nix exps
-
-date: 2018-07-25
-
-layout: post
-
-lang: en
-
-eu_categories: nix
-
-ref: nix-exps
-
----
-
-```nix
+[source,nix]
+----
let
pkgsOriginal = import <nixpkgs> {};
pkgsSrc = pkgsOriginal.fetchzip {
@@ -55,4 +42,4 @@ in rec {
'';
};
}
-```
+----
diff --git a/src/content/pastebins/2018/07/25/nix-showdrv.adoc b/src/content/pastebins/2018/07/25/nix-showdrv.adoc
index 813965d..d7a2d9a 100644
--- a/src/content/pastebins/2018/07/25/nix-showdrv.adoc
+++ b/src/content/pastebins/2018/07/25/nix-showdrv.adoc
@@ -1,20 +1,7 @@
----
+= nix show-derivation sample output
-title: nix show-derivation sample output
-
-date: 2018-07-25
-
-layout: post
-
-lang: en
-
-eu_categories: nix
-
-ref: nix-show-derivation-sample-output
-
----
-
-```nix
+[source,nix]
+----
$ nix show-derivation /nix/store/zzz9cl2ly0mb2njr7vwa5528fxmn29m8-combofont-0.2.drv
{
"/nix/store/zzz9cl2ly0mb2njr7vwa5528fxmn29m8-combofont-0.2.drv": {
@@ -83,4 +70,4 @@ $ nix show-derivation /nix/store/zzz9cl2ly0mb2njr7vwa5528fxmn29m8-combofont-0.2.
}
}
}
-```
+----
diff --git a/src/content/pastebins/2019/06/08/inconsistent-hash.adoc b/src/content/pastebins/2019/06/08/inconsistent-hash.adoc
index 51d8ad3..5877018 100644
--- a/src/content/pastebins/2019/06/08/inconsistent-hash.adoc
+++ b/src/content/pastebins/2019/06/08/inconsistent-hash.adoc
@@ -1,35 +1,24 @@
----
+= Inconsistent hash of buildGoModule
-title: Inconsistent hash of buildGoModule
+:commit: https://euandre.org/git/servers/commit?id=6ba76140238b5e3c7009c201f9f80ac86063f438
-date: 2019-06-08
+''''
-layout: post
+FIXED: The `<nixpkgs>` was different on different environments. See
+https://discourse.nixos.org/t/inconsistent-hash-of-buildgomodule/3127/2.
-lang: en
+''''
-eu_categories: nix
+The {commit}[commit that made this visible].
-ref: inconsistent-hash-of-buildgomodule
+== Offending derivation:
----
+:orig-src: https://euandre.org/git/servers/tree/default.nix?id=6ba76140238b5e3c7009c201f9f80ac86063f438#n3
-FIXED: The `<nixpkgs>` was different on different environments.
-See <https://discourse.nixos.org/t/inconsistent-hash-of-buildgomodule/3127/2>.
+{orig-src}[Full source code on the repository]:
----
-
-The [commit that made this visible][0].
-
-[0]: https://euandre.org/git/servers/commit?id=6ba76140238b5e3c7009c201f9f80ac86063f438
-
-## Offending derivation:
-
-[Full source code on the repository][1]:
-
-[1]: https://euandre.org/git/servers/tree/default.nix?id=6ba76140238b5e3c7009c201f9f80ac86063f438#n3
-
-```nix
+[source,nix]
+----
terraform-godaddy = pkgs.buildGoModule rec {
name = "terraform-godaddy-${version}";
version = "1.6.4";
@@ -43,11 +32,12 @@ terraform-godaddy = pkgs.buildGoModule rec {
postInstall =
"mv $out/bin/terraform-godaddy $out/bin/terraform-provider-godaddy";
};
-```
+----
-## Local build:
+== Local build:
-```shell
+[source,shell]
+----
$ nix-build -A terraform-godaddy
these derivations will be built:
/nix/store/3hs274i9qdsg3hsgp05j7i5cqxsvpcqx-terraform-godaddy-1.6.4-go-modules.drv
@@ -193,14 +183,15 @@ hash mismatch in fixed-output derivation '/nix/store/jgbfkhlsz6bmq724p5cqqcgfyc7
got: sha256:10n2dy7q9kk1ly58sw965n6qa8l0nffh8vyd1vslx0gdlyj25xxs
cannot build derivation '/nix/store/y5961vv6y9c0ps2sbd8xfnpqvk0q7qhq-terraform-godaddy-1.6.4.drv': 1 dependencies couldn't be built
error: build of '/nix/store/y5961vv6y9c0ps2sbd8xfnpqvk0q7qhq-terraform-godaddy-1.6.4.drv' failed
-```
+----
-## Build [on CI](https://builds.sr.ht/~euandreh/job/67836#task-setup-0):
+== Build https://builds.sr.ht/~euandreh/job/67836#task-setup-0[on CI]:
-The `setup.sh` script contains a call to `nix-shell` which in turns
-build the same `terraform-godaddy` derivation:
+The `setup.sh` script contains a call to `nix-shell` which in turn builds the
+same `terraform-godaddy` derivation:
-```shell
+[source,shell]
+----
$ cd vps/
$ ./scripts/ci/setup.sh
warning: Nix search path entry '/nix/var/nix/profiles/per-user/root/channels' does not exist, ignoring
@@ -1058,4 +1049,4 @@ hash mismatch in fixed-output derivation '/nix/store/q8y0mzjl78hfhazjgq2sc84i7dp
cannot build derivation '/nix/store/w4ghinrmpq524k3617ikfc8i42aa0dbb-terraform-godaddy-1.6.4.drv': 1 dependencies couldn't be built
copying path '/nix/store/63gjp25l4cmdkl63zy0rcgmsvd2p2p34-terraform-0.11.14' from 'https://cache.nixos.org'...
error: build of '/nix/store/9drkn1qxkkcrz5g3413lpmbc2xysa582-terraform-0.11.14.drv', '/nix/store/w4ghinrmpq524k3617ikfc8i42aa0dbb-terraform-godaddy-1.6.4.drv' failed
-```
+----
diff --git a/src/content/pastebins/2019/12/29/raku-tuple-type.adoc b/src/content/pastebins/2019/12/29/raku-tuple-type.adoc
index 3d5ff34..8bb7250 100644
--- a/src/content/pastebins/2019/12/29/raku-tuple-type.adoc
+++ b/src/content/pastebins/2019/12/29/raku-tuple-type.adoc
@@ -1,18 +1,7 @@
----
+= Raku tuple type annotation
-title: Raku tuple type annotation
-
-date: 2019-12-29
-
-layout: post
-
-lang: en
-
-ref: raku-tuple-type-annotation
-
----
-
-```perl
+[source,perl]
+----
# Single Str return value: this works
sub f1(Str $in --> Str) {
$in;
@@ -27,11 +16,12 @@ sub f2(Str $in) {
sub f2(Str $in --> (Str, Str)) {
($in, $in);
}
-```
+----
Error log is:
-```perl
+[source,perl]
+----
===SORRY!=== Error while compiling /path/to/my/file
Malformed return value
-```
+----
diff --git a/src/content/pastebins/2020/01/04/guix-import-failure.adoc b/src/content/pastebins/2020/01/04/guix-import-failure.adoc
index 3388a8d..9e5d987 100644
--- a/src/content/pastebins/2020/01/04/guix-import-failure.adoc
+++ b/src/content/pastebins/2020/01/04/guix-import-failure.adoc
@@ -1,20 +1,7 @@
----
+= Failure on Guix TeX Live importer
-title: Failure on Guix TeX Live importer
-
-date: 2020-01-04
-
-layout: post
-
-lang: en
-
-eu_categories: guix
-
-ref: failure-on-guix-tex-live-importer
-
----
-
-```shell
+[source,shell]
+----
$ guix import texlive fontspec
redirection vers « https://ctan.org/xml/1.2/pkg/fontspec »...
Backtrace:
@@ -44,4 +31,4 @@ In guix/build/utils.scm:
guix/build/utils.scm:652:6: In procedure invoke:
Throw to key `srfi-34' with args `(#<condition &invoke-error [program: "svn" arguments: ("export" "--non-interactive" "--trust-server-cert" "-r" "49435" "svn://www.tug.org/texlive/tags/texlive-2018.2/Master/texmf-dist/source/latex/fontspec" "/tmp/guix-directory.WtLohP") exit-status: 1 term-signal: #f stop-signal: #f] 7fe80d229c80>)'.
-```
+----
diff --git a/src/content/pastebins/2020/02/14/guix-shebang.adoc b/src/content/pastebins/2020/02/14/guix-shebang.adoc
index 67d504d..862b12f 100644
--- a/src/content/pastebins/2020/02/14/guix-shebang.adoc
+++ b/src/content/pastebins/2020/02/14/guix-shebang.adoc
@@ -1,23 +1,10 @@
----
+= Guix shebang
-title: Guix shebang
-
-date: 2020-02-14
-
-layout: post
-
-lang: en
-
-eu_categories: guix
-
-ref: guix-shebang
-
----
-
-```shell
+[source,shell]
+----
#!/usr/bin/env -S guix environment --ad-hoc bash -- bash
set -Eeuo pipefail
cd "$(dirname "${BASH_SOURCE[0]}")"
pwd
-```
+----
diff --git a/src/content/pastebins/2020/11/27/guix-build-local.adoc b/src/content/pastebins/2020/11/27/guix-build-local.adoc
index 350d50f..e703ba6 100644
--- a/src/content/pastebins/2020/11/27/guix-build-local.adoc
+++ b/src/content/pastebins/2020/11/27/guix-build-local.adoc
@@ -1,25 +1,14 @@
----
+= Guix build local module
-title: Guix build local module
+FIXED: rename `name` on line 9 of the first snippet, and use `"my-hello"`
+instead of `"hello"`.
-date: 2020-11-27
-
-layout: post
-
-lang: en
-
-eu_categories: guix
-
-ref: guix-build-local-module
-
----
-
-FIXED: rename `name` on line 9 of the first snippet, and use `"my-hello"` instead of `"hello"`.
-
----
+'''''
Inside a file named `build.scm`:
-```scheme
+
+[source,scheme]
+----
(define-module (build)
#:use-module (guix packages)
#:use-module (guix download)
@@ -42,19 +31,20 @@ Inside a file named `build.scm`:
(description "")
(home-page "")
(license gpl3+)))
-
-```
+----
A plain build command didn't work:
-```shell
+[source,shell]
+----
$ guix build -L. my-hello
guix build: error: my-hello : paquet inconnu
-```
+----
But with an eval expression it did:
-```shell
+[source,shell]
+----
$ guix build -L. -e '(@ (build) my-hello)'
# works
-```
+----
diff --git a/src/content/pastebins/2020/12/15/guix-pack-fail.adoc b/src/content/pastebins/2020/12/15/guix-pack-fail.adoc
index 2834f90..72172db 100644
--- a/src/content/pastebins/2020/12/15/guix-pack-fail.adoc
+++ b/src/content/pastebins/2020/12/15/guix-pack-fail.adoc
@@ -1,28 +1,15 @@
----
+= Failure with relocatable Guix pack tarball
-title: Failure with relocatable Guix pack tarball
-
-date: 2020-12-15
-
-layout: post
-
-lang: en
-
-eu_categories: guix
-
-ref: failure-with-relocatable-guix-pack-tarball
-
----
+:post: https://guix.gnu.org/blog/2018/tarballs-the-ultimate-container-image-format/
FIXED: Use `GUIX_PROFILE= source etc/profile`
----
+'''''
-The example from the [blog post][guix-tarball-article] fails.
+The example from the {post}[blog post] fails.
-[guix-tarball-article]: https://guix.gnu.org/blog/2018/tarballs-the-ultimate-container-image-format/
-
-```shell
+[source,shell]
+----
$ tar xf `guix pack --relocatable -S /bin=bin -S /etc=etc guile gnutls guile-json`
$ source etc/profile
$ bin/guile -c '(use-modules (json))'
@@ -83,14 +70,16 @@ In ice-9/boot-9.scm:
ice-9/boot-9.scm:3300:6: In procedure resolve-interface:
no code for module (gnutls)
-```
+----
My Guix version if fairly recent:
-```shell
+
+[source,shell]
+----
$ guix describe
Génération 83 14 déc. 2020 00:28:16 (actuelle)
guix 41807eb
URL du dépôt : https://git.savannah.gnu.org/git/guix.git
branche: master
commit : 41807eb5329299b8c45cd49356a4ead01ce0d469
-```
+----
diff --git a/src/content/pastebins/2021/04/03/naive-slugify-js.adoc b/src/content/pastebins/2021/04/03/naive-slugify-js.adoc
index f765495..648e5fa 100644
--- a/src/content/pastebins/2021/04/03/naive-slugify-js.adoc
+++ b/src/content/pastebins/2021/04/03/naive-slugify-js.adoc
@@ -1,20 +1,7 @@
----
+= JavaScript naive slugify
-title: JavaScript naive slugify
-
-date: 2021-04-03
-
-updated_at: 2021-08-15
-
-layout: post
-
-lang: en
-
-ref: javascript-naive-slugify
-
----
-
-```javascript
+[source,javascript]
+----
const s = "Pézão: açaí, saci-pererê.";
const slugify = s =>
@@ -37,4 +24,4 @@ const slugify = s =>
.replaceAll("ç", "c");
console.log(slugify(s));
-```
+----
diff --git a/src/content/pastebins/2021/06/08/reading-session-pt1.adoc b/src/content/pastebins/2021/06/08/reading-session-pt1.adoc
index b97ef08..d9f1f91 100644
--- a/src/content/pastebins/2021/06/08/reading-session-pt1.adoc
+++ b/src/content/pastebins/2021/06/08/reading-session-pt1.adoc
@@ -1,18 +1,7 @@
----
+= Debit Reading Session - SICP solutions pt.1
-title: Debit Reading Session - SICP solutions pt.1
-
-date: 2021-06-08
-
-layout: post
-
-lang: en
-
-ref: debit-reading-session-sicp-solutions-pt-1
-
----
-
-```scheme
+[source,scheme]
+----
;; 1.41
(define (double f)
(lambda (x)
@@ -72,6 +61,6 @@ ref: debit-reading-session-sicp-solutions-pt-1
;;; 2.32
TODO
-```
+----
FYI: I just typed those in, I didn't yet test them yet.
diff --git a/src/content/pastebins/2021/06/22/curl-wget.adoc b/src/content/pastebins/2021/06/22/curl-wget.adoc
index 1030c7b..08caf12 100644
--- a/src/content/pastebins/2021/06/22/curl-wget.adoc
+++ b/src/content/pastebins/2021/06/22/curl-wget.adoc
@@ -1,20 +1,9 @@
----
-
-title: "cloc: curl and wget"
-
-date: 2021-06-22
-
-layout: post
-
-lang: en
-
-ref: cloc-curl-and-wget
-
----
+= "cloc: curl and wget"
`curl`:
-```shell
+[source,shell]
+----
$ pushd `mktemp -d`
/tmp/tmp.AZkwvk7azD ~/
$ git clone git://github.com/curl/curl .
@@ -62,11 +51,12 @@ TOML 1 0 0
------------------------------------------------------------------------------------
SUM: 1309 39501 56083 254078
------------------------------------------------------------------------------------
-```
+----
`wget`:
-```shell
+[source,shell]
+----
$ pushd `mktemp -d`
/tmp/tmp.NX0udlJMiz ~/
$ git clone git://git.savannah.gnu.org/wget.git .
@@ -99,4 +89,4 @@ lex 1 29 65 73
--------------------------------------------------------------------------------
SUM: 342 11400 14185 52018
--------------------------------------------------------------------------------
-```
+----
diff --git a/src/content/pastebins/2021/08/11/h1-spacing.adoc b/src/content/pastebins/2021/08/11/h1-spacing.adoc
index 9a00ece..e8a6986 100644
--- a/src/content/pastebins/2021/08/11/h1-spacing.adoc
+++ b/src/content/pastebins/2021/08/11/h1-spacing.adoc
@@ -1,29 +1,17 @@
----
+= Spaces around h1 tags
-title: Spaces around h1 tags
+// FIXME updated_at: 2021-08-15
-date: 2021-08-11
+_EDIT_: Apparently, the behaviour below is consistent between Firefox and
+Chromium for links, but not for `<h1>`. My conclusion is that the `<h1>`
+behaviour is a Firefox quirk, but the `<a>` is expected.
-updated_at: 2021-08-15
-
-layout: post
-
-lang: en
-
-ref: spaces-around-h1-tags
-
----
-
-*EDIT*: Apparently, the behaviour below is consistent between Firefox and
-Chromium for links, but not for `<h1>`.
-My conclusion is that the `<h1>` behaviour is a Firefox quirk, but the `<a>` is
-expected.
-
----
+'''''
The HTML below has selectable extra spaces after `<h1>` tags:
-```html
+[source,html]
+----
<!DOCTYPE html>
<html lang="en">
<head>
@@ -43,23 +31,22 @@ The HTML below has selectable extra spaces after `<h1>` tags:
</main>
</body>
</html>
-```
+----
The rendered output is:
-<h1>
- With spaces around when selecting this heading
-</h1>
-<h1>Without spaces around</h1>
-<p>
- Is this expected behaviour?
-</p>
+With spaces around when selecting this heading
+
+Without spaces around
----
+Is this expected behaviour?
+
+'''''
The same with links:
-```html
+[source,html]
+----
<!DOCTYPE html>
<html lang="en">
<head>
@@ -80,17 +67,10 @@ The same with links:
</main>
</body>
</html>
-```
+----
The rendered output is:
-<p>
- <a href="#">
- With extra underlined space
- </a>
- after the link.
-</p>
-<p>
- <a href="#">Without extra underlined space</a>
- after the link.
-</p>
+With extra underlined space after the link.
+
+Without extra underlined space after the link.
diff --git a/src/content/pastebins/2021/09/02/sicp-3-19.adoc b/src/content/pastebins/2021/09/02/sicp-3-19.adoc
index 75ee346..f6aa8cf 100644
--- a/src/content/pastebins/2021/09/02/sicp-3-19.adoc
+++ b/src/content/pastebins/2021/09/02/sicp-3-19.adoc
@@ -1,18 +1,7 @@
----
+= SICP exercise 3.19
-title: SICP exercise 3.19
-
-date: 2021-09-02
-
-layout: post
-
-lang: en
-
-ref: sicp-exercise-3-19
-
----
-
-```scheme
+[source,scheme]
+----
(define (cycle? l)
(define (rec l x)
(cond
@@ -20,11 +9,12 @@ ref: sicp-exercise-3-19
((eq? l x) true)
(true (rec l (cdr x)))))
(rec l (cdr l)))
-```
+----
Sample interactive session:
-```scheme
+[source,scheme]
+----
scheme@(guile-user)> (define true #t)
scheme@(guile-user)> (define false #f)
scheme@(guile-user)>
@@ -39,4 +29,4 @@ scheme@(guile-user)> (cycle? '(1 2 3))
$9 = #f
scheme@(guile-user)> (cycle? (make-cycle '(1 2 3)))
$10 = #t
-```
+----
diff --git a/src/content/pastebins/2021/09/03/sicp-persistent-queue.adoc b/src/content/pastebins/2021/09/03/sicp-persistent-queue.adoc
index 8cf7ea2..2b4a8a2 100644
--- a/src/content/pastebins/2021/09/03/sicp-persistent-queue.adoc
+++ b/src/content/pastebins/2021/09/03/sicp-persistent-queue.adoc
@@ -1,36 +1,25 @@
----
+= SICP persistent amortized O(1) queue
-title: SICP persistent amortized O(1) queue
-
-date: 2021-09-03
-
-layout: post
-
-lang: en
-
-ref: sicp-persistent-amortized-o1-queue
-
----
-
-```scheme
+[source,scheme]
+----
(define (queue)
(cons '()
- '()))
+ '()))
(define (enqueue x q)
(cons (car q)
- (cons x (cdr q))))
+ (cons x (cdr q))))
(define (flush q)
(cons (reverse (cdr q))
- '()))
+ '()))
(define (dequeue q)
(if (null? (car q))
(dequeue (flush q))
(cons (caar q)
- (cons (cdar q)
- (cdr q)))))
+ (cons (cdar q)
+ (cdr q)))))
(define (empty? q)
(and (null? (car q))
@@ -43,7 +32,7 @@ ref: sicp-persistent-amortized-o1-queue
(define (rec l leading-space?)
(when (not (null? l))
(when leading-space?
- (display " "))
+ (display " "))
(display (car l))
(rec (cdr l) #t)))
@@ -52,10 +41,12 @@ ref: sicp-persistent-amortized-o1-queue
(rec (reverse (cdr q)) (not (null? (car q))))
(display ")")
(newline))
-```
+----
Sample interactive session:
-```scheme
+
+[source,scheme]
+----
scheme@(guile-user)> (define true #t)
scheme@(guile-user)> (define false #f)
scheme@(guile-user)> (define q (queue))
@@ -82,4 +73,4 @@ scheme@(guile-user)> (print-queue q)
#q(b c d e)
scheme@(guile-user)> (print-queue (cdr (dequeue (cdr (dequeue (enqueue 'g (enqueue 'f q)))))))
#q(d e f g)
-```
+----
diff --git a/src/content/pastebins/2022/07/14/git-cleanup.adoc b/src/content/pastebins/2022/07/14/git-cleanup.adoc
index 52cd17f..b223f86 100644
--- a/src/content/pastebins/2022/07/14/git-cleanup.adoc
+++ b/src/content/pastebins/2022/07/14/git-cleanup.adoc
@@ -1,18 +1,7 @@
----
+= git-cleanup command
-title: git-cleanup command
-
-date: 2022-07-14
-
-layout: post
-
-lang: en
-
-ref: git-cleanup-command
-
----
-
-```
+[source,sh]
+----
#!/bin/sh
set -eu
@@ -67,4 +56,4 @@ shift $((OPTIND - 1))
git branch --merged |
grep -v -e '^\*' -e '^. main$' |
xargs git branch -d
-```
+----
diff --git a/src/content/pastebins/2023/07/22/funcallable-amop.adoc b/src/content/pastebins/2023/07/22/funcallable-amop.adoc
index 37c79fe..47a8089 100644
--- a/src/content/pastebins/2023/07/22/funcallable-amop.adoc
+++ b/src/content/pastebins/2023/07/22/funcallable-amop.adoc
@@ -1,22 +1,10 @@
----
-
-title: Funcallable AMOP
-
-date: 2023-07-22
-
-layout: post
-
-lang: en
-
-ref: funcallable-amop
-
----
-
+= Funcallable AMOP
Using `macrolet` to allow a `funcallable-standard-class` to be invoked without
using `funcall` directly, and let the macroexpansion do that instead:
-```
+[source,lisp]
+----
#!/usr/bin/env li
(asdf:load-system :closer-mop)
@@ -40,4 +28,4 @@ using `funcall` directly, and let the macroexpansion do that instead:
`(funcall c ,@body)))
(funcall c 2)
(c 3)))
-```
+----
diff --git a/src/content/podcasts/2020/12/19/test-entry.adoc b/src/content/podcasts/2020/12/19/test-entry.adoc
index 3ec8811..3302421 100644
--- a/src/content/podcasts/2020/12/19/test-entry.adoc
+++ b/src/content/podcasts/2020/12/19/test-entry.adoc
@@ -2,18 +2,16 @@
audio: true
-
After.
-A link to [home][home].
+A link to https://euandre.org[home].
Another link to home: https://euandre.org
-[home]: https://euandre.org
-
A code block:
-```shell
+[source,shell]
+----
$ l
total 372K
drwxr-xr-x 23 andreh users 4,0K déc. 19 10:44 ./
@@ -100,4 +98,4 @@ drwxr-xr-x 2 andreh users 4,0K déc. 19 09:04 static/
-rw-r--r-- 1 andreh users 265 nov. 15 20:01 til.md
drwxr-xr-x 2 andreh users 4,0K déc. 16 04:03 _tils/
drwxr-xr-x 3 andreh users 4,0K oct. 10 09:20 vendor/
-```
+----
diff --git a/src/content/screencasts/2021/02/07/autoqemu.adoc b/src/content/screencasts/2021/02/07/autoqemu.adoc
index e295a71..0383bc3 100644
--- a/src/content/screencasts/2021/02/07/autoqemu.adoc
+++ b/src/content/screencasts/2021/02/07/autoqemu.adoc
@@ -1,42 +1,51 @@
= AutoQEMU - automate installation and SSH setup of ISO OS images
+:empty:
+:begriffs-tips: https://begriffs.com/posts/2020-08-31-portable-stable-software.html
+:public-thread: https://talk.begriffs.com/pipermail/friends/2021-February/001263.html
+:archived: https://euandre.org/static/attachments/autoqemu.tar.gz
+:expect: https://core.tcl-lang.org/expect/index
+:script: https://www.man7.org/linux/man-pages/man1/script.1.html
+:klaatu: https://gnuworldorder.info/
+
video: true
-After reading begriffs "[Tips for stable and portable software]", the
-"Begriffs Buildfarm?" section caught my attention, as this is something I would
-be interested in.
+After reading begriffs' "{begriffs-tips}[Tips for stable and portable software]",
+the "Begriffs Buildfarm?" section caught my attention, as this is something I
+would be interested in.
-After emailing the author, a [public thread] began on the subject.
+After emailing the author, a {public-thread}[public thread] began on the
+subject.
As we discussed how it could be done, I decided to experiment with the idea of
automating the setup of virtual environments with QEMU.
-This screencast is a simple demo of automating the installation of
-Alpine Linux 3.12.3 standard x86_64 with AutoQEMU[^AutoQEMU], which is nothing
-more than POSIX sh, [expect] scripts and Makefiles glued together.
-
-[^AutoQEMU]: The solution was a little too brittle to scale, and some
-distributions proved to be particularly problematic. I've [archived] my
-progress if you're interested in what I've done, and maybe wish to continue.
+This screencast is a simple demo of automating the installation of Alpine Linux
+3.12.3 standard x86_64 with
+AutoQEMU{empty}footnote:autoqemu[
+ The solution was a little too brittle to scale, and some distributions proved
+ to be particularly problematic. I've {archived}[archived] my progress if
+ you're interested in what I've done, and maybe wish to continue.
+], which is nothing more than POSIX sh, {expect}[expect] scripts and Makefiles
+glued together.
As of this writing, I just worked on it for 2~3 days, so everything is still
pretty ad-hoc.
-The commands from the screencast were[^script-command]:
+The commands from the screencast
+were{empty}footnote:script-command[
+ Only now, while writing down what I ran on the screencast, did I realize I
+ should have tried something like {script}[script]. Maybe next time (thanks
+ {klaatu}[klaatu] for the tip!).
+]:
-[^script-command]: Only now, writing again what I ran on the screencast I thought that I should have tried something like [script](https://www.man7.org/linux/man-pages/man1/script.1.html). Maybe next time (thanks [klaatu](https://gnuworldorder.info/) for the tip!).
-
-```shell
+[source,shell]
+----
pushd `mktemp -d`
git clone https://euandre.org/git/autoqemu .
make
make install PREFIX=$HOME/.local
autoqemu ssh alpine
-```
+----
It assumes that `$HOME/.local/bin` is in `$PATH`.
-
-[Tips for stable and portable software]: https://begriffs.com/posts/2020-08-31-portable-stable-software.html
-[public thread]: https://talk.begriffs.com/pipermail/friends/2021-February/001263.html
-[archived]: https://euandre.org/static/attachments/autoqemu.tar.gz
-[expect]: https://core.tcl-lang.org/expect/index
diff --git a/src/content/tils/2020/08/12/filename-timestamp.adoc b/src/content/tils/2020/08/12/filename-timestamp.adoc
index 7495fc9..ed7dee8 100644
--- a/src/content/tils/2020/08/12/filename-timestamp.adoc
+++ b/src/content/tils/2020/08/12/filename-timestamp.adoc
@@ -1,44 +1,30 @@
----
-
-title: Simple filename timestamp
-
-date: 2020-08-12
-
-updated_at:
-
-layout: post
-
-lang: en
-
-ref: simple-filename-timestamp
-
-eu_categories: shell
-
----
+= Simple filename timestamp
When writing Jekyll posts or creating log files with dates on them, I usually
-struggle with finding a direct way of accomplishing that. There's a simple
+struggle with finding a direct way of accomplishing that. There's a simple
solution: `date -I`.
-```shell
+[source,shell]
+----
./my-program.sh > my-program.$(date -I).log
cp post-template.md _posts/$(date -I)-post-slug.md
-```
+----
-Using this built-in GNU/Linux tool allows you to `touch $(date -I).md` to readily
-create a `2020-08-12.md` file.
+Using this built-in GNU/Linux tool allows you to `touch $(date -I).md` to
+readily create a `2020-08-12.md` file.
I always had to read `man date` or search the web over and over, and after doing
this repeatedly it became clear that both `date -I` and `date -Is` (`s` here
stands for seconds) are the thing that I'm looking for 95% of the time:
-```shell
+[source,shell]
+----
# inside my-program.sh
echo "Program started at $(date -Is)"
# output is:
# Program started at 2020-08-12T09:04:58-03:00
-```
+----
Both date formats are hierarchical, having the bigger time intervals to the
-left. This means that you can easily sort them (and even tab-complete them) with
-no extra effort or tool required.
+left. This means that you can easily sort them (and even tab-complete them)
+with no extra effort or tool required.
diff --git a/src/content/tils/2020/08/13/code-jekyll.adoc b/src/content/tils/2020/08/13/code-jekyll.adoc
index 6566928..6bd90b0 100644
--- a/src/content/tils/2020/08/13/code-jekyll.adoc
+++ b/src/content/tils/2020/08/13/code-jekyll.adoc
@@ -1,33 +1,35 @@
----
-title: Anchor headers and code lines in Jekyll
-date: 2020-08-13
-layout: post
-lang: en
-ref: anchor-headers-and-code-lines-in-jekyll
----
-The default Jekyll toolbox ([Jekyll][0], [kramdown][1] and [rouge][2]) doesn't
-provide with a configuration option to add anchors to headers and code blocks.
-
-[0]: https://jekyllrb.com/
-[1]: https://kramdown.gettalong.org/
-[2]: http://rouge.jneen.net/
+= Anchor headers and code lines in Jekyll
-The best way I found of doing this is by creating a simple Jekyll plugin, more
-specifically, a [Jekyll hook][3]. These allow you to jump in to the Jekyll build
-and add a processing stage before of after Jekyll performs something.
+:empty:
+:jekyll: https://jekyllrb.com/
+:kramdown: https://kramdown.gettalong.org/
+:rouge: https://rouge.jneen.net/
+:jekyll-hook: https://jekyllrb.com/docs/plugins/hooks/
+
+The default Jekyll toolbox ({jekyll}[Jekyll], {kramdown}[kramdown] and
+{rouge}[rouge]) doesn't provide a configuration option to add anchors to
+headers and code blocks.
-[3]: https://jekyllrb.com/docs/plugins/hooks/
+The best way I found of doing this is by creating a simple Jekyll plugin, more
+specifically, a {jekyll-hook}[Jekyll hook]. These allow you to jump in to the
+Jekyll build and add a processing stage before or after Jekyll performs
+something.
All you have to do is add the code to `_plugins/my-jekyll-plugin-code.rb`, and
Jekyll knows to pick it up and call your code on the appropriate time.
-## Anchor on headers
+== Anchor on headers
+
+:jemoji: https://github.com/jekyll/jemoji
+:jekyll-mentions: https://github.com/jekyll/jekyll-mentions
+:html-regex: https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454
Since I wanted to add anchors to headers in all documents, this Jekyll hook
works on `:documents` after they have been transformed into HTML, the
`:post_render` phase:
-```ruby
+[source,ruby]
+----
Jekyll::Hooks.register :documents, :post_render do |doc|
if doc.output_ext == ".html"
doc.output =
@@ -37,30 +39,30 @@ Jekyll::Hooks.register :documents, :post_render do |doc|
)
end
end
-```
-
-I've derived my implementations from two "official"[^official] hooks,
-[jemoji][4] and [jekyll-mentions][5].
+----
-[4]: https://github.com/jekyll/jemoji
-[5]: https://github.com/jekyll/jekyll-mentions
-[^official]: I don't know how official they are, I just assumed it because they
- live in the same organization inside GitHub that Jekyll does.
+I've derived my implementations from two
+"official"{empty}footnote:official[
+ I don't know how official they are, I just assumed it because they live in the
+ same organization inside GitHub that Jekyll does.
+] hooks, {jemoji}[jemoji] and {jekyll-mentions}[jekyll-mentions].
All I did was to wrap the header tag inside an `<a>`, and set the `href` of that
-`<a>` to the existing id of the header. Before the hook the HTML looks like:
+`<a>` to the existing id of the header. Before the hook the HTML looks like:
-```html
+[source,html]
+----
...some unmodified text...
<h2 id="my-header">
My header
</h2>
...more unmodified text...
-```
+----
And after the hook should turn that into:
-```html
+[source,html]
+----
...some unmodified text...
<a href="#my-header">
<h2 id="my-header">
@@ -68,20 +70,18 @@ And after the hook should turn that into:
</h2>
</a>
...more unmodified text...
-```
+----
The used regexp tries to match only h1-h6 tags, and keep the rest of the HTML
-attributes untouched, since this isn't a general HTML parser, but the generated HTML
-is somewhat under your control. Use at your own risk because
-[you shouldn't parse HTML with regexps][6]. Also I used this strategy in my
-environment, where no other plugins are installed. I haven't considered how this
-approach may conflict with other Jekyll plugins.
-
-[6]: https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454
+attributes untouched, since this isn't a general HTML parser, but the generated
+HTML is somewhat under your control. Use at your own risk because
+{html-regex}[you shouldn't parse HTML with regexps]. Also I used this strategy
+in my environment, where no other plugins are installed. I haven't considered
+how this approach may conflict with other Jekyll plugins.
In the new anchor tag you can add your custom CSS class to style it as you wish.
-## Anchor on code blocks
+== Anchor on code blocks
Adding anchors to code blocks needs a little bit of extra work, because line
numbers themselves don't have preexisting ids, so we need to generate them
@@ -90,7 +90,8 @@ without duplications between multiple code blocks in the same page.
Similarly, this Jekyll hook also works on `:documents` in the `:post_render`
phase:
-```ruby
+[source,ruby]
+----
PREFIX = '<pre class="lineno">'
POSTFIX = '</pre>'
Jekyll::Hooks.register :documents, :post_render do |doc|
@@ -111,25 +112,27 @@ Jekyll::Hooks.register :documents, :post_render do |doc|
end
end
end
-```
+----
This solution assumes the default Jekyll toolbox with code line numbers turned
on in `_config.yml`:
-```yaml
+[source,yaml]
+----
kramdown:
syntax_highlighter_opts:
span:
line_numbers: false
block:
line_numbers: true
-```
+----
The anchors go from B1-L1 to BN-LN, using the `code_block_counter` to track
-which code block we're in and don't duplicate anchor ids. Before the hook the
+which code block we're in and avoid duplicate anchor ids. Before the hook the
HTML looks like:
-```html
+[source,html]
+----
...some unmodified text...
<pre class="lineno">1
2
@@ -138,11 +141,12 @@ HTML looks like:
5
</pre>
...more unmodified text...
-```
+----
And after the hook should turn that into:
-```html
+[source,html]
+----
...some unmodified text...
<pre class="lineno"><a id="B1-L1" href="#B1-L1">1</a>
<a id="B1-L2" href="#B1-L2">2</a>
@@ -150,6 +154,6 @@ And after the hook should turn that into:
<a id="B1-L4" href="#B1-L4">4</a>
<a id="B1-L5" href="#B1-L5">5</a></pre>
...more unmodified text...
-```
+----
Happy writing :)
diff --git a/src/content/tils/2020/08/14/browse-git.adoc b/src/content/tils/2020/08/14/browse-git.adoc
index d06f0c1..556288c 100644
--- a/src/content/tils/2020/08/14/browse-git.adoc
+++ b/src/content/tils/2020/08/14/browse-git.adoc
@@ -1,28 +1,15 @@
----
-
-title: Browse a git repository at a specific commit
-
-date: 2020-08-14
-
-layout: post
-
-lang: en
-
-ref: browse-a-git-repository-at-a-specific-commit
-
-eu_categories: git
-
----
+= Browse a git repository at a specific commit
I commonly use tools like `git log` together with `git show` when inspecting
past changes in a repository:
-```shell
+[source,shell]
+----
git log
# search for a the commit I'm looking for
git show <my-commit>
# see the diff for the commit
-```
+----
But I also wanted to not only be able to look at the diff of a specific commit,
but to browse the whole repository at that specific commit.
@@ -30,55 +17,59 @@ but to browse the whole repository at that specific commit.
I used to accomplish it the "brute force" way: clone the whole repository in
another folder and checkout the commit there:
-```shell
+[source,shell]
+----
git clone <original-repo> /tmp/tmp-repo-clone
cd /tmp-repo-clone
git checkout <my-commit>
-```
+----
But git itself allows us to specify the directory of the checkout by using the
-`--work-tree` global git flag. This is what `man git` says about it:
+`--work-tree` global git flag. This is what `man git` says about it:
-```txt
+[source,txt]
+----
--work-tree=<path>
Set the path to the working tree. It can be an absolute path or a path relative to the current working
directory. This can also be controlled by setting the GIT_WORK_TREE environment variable and the
core.worktree configuration variable (see core.worktree in git-config(1) for a more detailed
discussion).
-```
+----
-So it allows us to set the desired path of the working tree. So if we want to
+So it allows us to set the desired path of the working tree. So if we want to
copy the contents of the current working tree into `copy/`:
-```shell
+[source,shell]
+----
mkdir copy
git --work-tree=copy/ checkout .
-```
+----
-After that `copy/` will contain a replica of the code in HEAD. But to checkout a
-specific, we need some extra parameters:
+After that `copy/` will contain a replica of the code in HEAD. But to checkout
+a specific commit, we need some extra parameters:
-```shell
+[source,shell]
+----
git --work-tree=<dir> checkout <my-commit> -- .
-```
+----
There's an extra `-- .` at the end, which initially looks like we're sending
-Morse signals to git, but we're actually saying to `git-checkout` which
-sub directory of `<my-commit>` we want to look at. Which means we can do
-something like:
+Morse signals to git, but we're actually saying to `git-checkout` which
+subdirectory of `<my-commit>` we want to look at. Which means we can do
+something like:
-```shell
+[source,shell]
+----
git --work-tree=<dir> checkout <my-commit> -- src/
-```
+----
And with that `<dir>` will only contain what was inside `src/` at `<commit>`.
After any of those checkouts, you have to `git reset .` to reset your current
staging area back to what it was before the checkout.
+== References
-## References
-
-1. [GIT: Checkout to a specific folder][0] (StackOverflow)
+:so-link: https://stackoverflow.com/a/16493707
-[0]: https://stackoverflow.com/a/16493707
+. {so-link}[GIT: Checkout to a specific folder] (StackOverflow)
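The `--work-tree` checkout covered in this file can be sketched end to end as a throwaway script. Everything below (temp directories, `file.txt`, the commit messages) is invented for the demo, not taken from the post:

```shell
# Hypothetical demo of browsing an old commit via --work-tree.
set -eu
repo=$(mktemp -d)
snap=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo v1 > file.txt
git add file.txt
git commit -qm 'first'
old=$(git rev-parse HEAD)
echo v2 > file.txt
git commit -qam 'second'
# materialize the old commit into $snap without touching the working tree
git --work-tree="$snap" checkout "$old" -- .
# the checkout also updated the index, so reset it back
git reset -q .
cat "$snap/file.txt"
```

After running, `$snap/file.txt` holds the old `v1` contents while the working tree still has `v2`, which is exactly the "browse without a second clone" effect the post describes.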
diff --git a/src/content/tils/2020/08/16/git-search.adoc b/src/content/tils/2020/08/16/git-search.adoc
index f3ae6f0..5ad0eae 100644
--- a/src/content/tils/2020/08/16/git-search.adoc
+++ b/src/content/tils/2020/08/16/git-search.adoc
@@ -1,59 +1,48 @@
----
-
-title: Search in git
-
-date: 2020-08-16
-
-layout: post
-
-lang: en
-
-ref: search-in-git
-
-eu_categories: git
-
----
+= Search in git
Here's a useful trio to know about to help you search things in git:
-1. `git show <commit>`
-2. `git log --grep='<regexp>'`
-3. `git grep '<regexp>' [commit]`
+. `git show <commit>`
+. `git log --grep='<regexp>'`
+. `git grep '<regexp>' [commit]`
-## 1. `git show <commit>`
+== 1. `git show <commit>`
Show a specific commit and its diff:
-```shell
+[source,shell]
+----
git show
# shows the latest commit
git show <commit>
# shows a specific <commit>
git show v1.2
# shows commit tagged with v1.2
-```
+----
-## 2. `git log --grep='<regexp>'`
+== 2. `git log --grep='<regexp>'`
Search through the commit messages:
-```shell
+[source,shell]
+----
git log --grep='refactor'
-```
+----
-## 3. `git grep '<regexp>' [commit]`
+== 3. `git grep '<regexp>' [commit]`
+
+:browse-article: link:../14/browse-git.html
Search content in git history:
-```shell
+[source,shell]
+----
git grep 'TODO'
# search the repository for the "TODO" string
git grep 'TODO' $(git rev-list --all)
# search the whole history for "TODO" string
-```
+----
And if you find an occurrence of the regexp in a specific commit and you want to
-browse the repository in that point in time, you can
-[use git checkout for that][0].
-
-[0]: {% link _tils/2020-08-14-browse-a-git-repository-at-a-specific-commit.md %}
+browse the repository at that point in time, you can {browse-article}[use git
+checkout for that].
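The search trio from this file can be exercised in a disposable repository; the repo, file, and commit messages here are made up for illustration:

```shell
# Hypothetical repo exercising `git log --grep` and `git grep` over history.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo 'TODO: fix parser' > notes.txt
git add notes.txt
git commit -qm 'refactor: extract parser'
printf 'done\n' > notes.txt
git commit -qam 'cleanup'
# 2. search commit messages
msgs=$(git log --grep='refactor' --oneline)
# 3. search file contents across every commit, not just HEAD
hits=$(git grep 'TODO' $(git rev-list --all) || true)
printf '%s\n%s\n' "$msgs" "$hits"
```

The `TODO` string is gone from `HEAD`, yet `git grep` over `git rev-list --all` still finds it in the first commit, which is the point of searching history rather than the working tree.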
diff --git a/src/content/tils/2020/08/28/grep-online.adoc b/src/content/tils/2020/08/28/grep-online.adoc
index 8b3b63f..cade066 100644
--- a/src/content/tils/2020/08/28/grep-online.adoc
+++ b/src/content/tils/2020/08/28/grep-online.adoc
@@ -1,31 +1,19 @@
----
+= Grep online repositories
-title: Grep online repositories
-
-date: 2020-08-28
-
-layout: post
-
-lang: en
-
-ref: grep-online-repositories
-
-eu_categories: git
-
----
+:cgit: https://git.zx2c4.com/cgit/
I often find interesting source code repositories online that I want to grep for
some pattern but I can't, because either:
-- the repository is on [cgit][cgit] or a similar code repository that doesn't
- allow search in files, or;
-- the search function is really bad, and doesn't allow me to use regular expressions for searching patterns in the code.
-
-[cgit]: https://git.zx2c4.com/cgit/
+* the repository is on {cgit}[cgit] or a similar code repository that doesn't
+ allow search in files, or;
+* the search function is really bad, and doesn't allow me to use regular
+ expressions for searching patterns in the code.
Here's a simple script that allows you to overcome that problem easily:
-```shell
+[source,shell]
+----
#!/usr/bin/env bash
set -eu
@@ -66,7 +54,7 @@ pushd "/tmp/git-search/${DIRNAME}"
shift 3 || shift 2 # when "--" is missing
git grep "${REGEX_PATTERN}" "${@}"
-```
+----
It is a wrapper around `git grep` that downloads the repository when missing.
Save it in a file called `git-search`, make it executable and add it to your
@@ -74,33 +62,35 @@ path.
Overview:
-- *lines 1~2*:
-
- Bash shebang and the `set -eu` options to exit on error or undefined
- variables.
+* _lines 1~2_:
++
+Bash shebang and the `set -eu` options to exit on error or undefined
+variables.
-- *lines 4~30*:
+* _lines 4~30_:
++
+Usage text to be printed when providing fewer arguments than expected.
- Usage text to be printed when providing less arguments than expected.
+* _line 33_:
++
+Extract the repository name from the URL, removing trailing slashes.
-- *line 33*:
+* _lines 34~37_:
++
+Download the repository when missing and go to the folder.
- Extract the repository name from the URL, removing trailing slashes.
+* _line 39_:
++
+Make the variable `$@` contain the rest of the unused arguments.
-- *lines 34~37*:
-
- Download the repository when missing and go to the folder.
-
-- *line 39*:
-
- Make the variable `$@` contain the rest of the unused arguments.
-
-- *line 40*:
-
- Perform `git grep`, forwarding the remaining arguments from `$@`.
+* _line 40_:
++
+Perform `git grep`, forwarding the remaining arguments from `$@`.
Example output:
-```shell
+
+[source,shell]
+----
$ git search 'make get-git' https://git.zx2c4.com/cgit/
Clonage dans '/tmp/git-search/cgit'...
remote: Enumerating objects: 542, done.
@@ -116,12 +106,15 @@ README: $ make get-git
$ git search 'make get-git' https://git.zx2c4.com/cgit/
/tmp/git-search/cgit ~/dev/libre/songbooks/docs
README: $ make get-git
-```
+----
-Subsequent greps on the same repository are faster because no download is needed.
+Subsequent greps on the same repository are faster because no download is
+needed.
When no argument is provided, it prints the usage text:
-```shell
+
+[source,shell]
+----
$ git search
Missing argument REGEX_PATTERN.
@@ -136,4 +129,4 @@ Examples:
Searching "make get-git" in cgit repository:
git search 'make get-git' https://git.zx2c4.com/cgit/
git search 'make get-git' https://git.zx2c4.com/cgit/ -- $(git rev-list --all)
-```
+----
diff --git a/src/content/tils/2020/09/04/email-cli-fun-profit.adoc b/src/content/tils/2020/09/04/email-cli-fun-profit.adoc
index 320f3ab..5476fac 100644
--- a/src/content/tils/2020/09/04/email-cli-fun-profit.adoc
+++ b/src/content/tils/2020/09/04/email-cli-fun-profit.adoc
@@ -1,25 +1,22 @@
----
-title: Send emails using the command line for fun and profit!
-date: 2020-09-04
-layout: post
-lang: en
-ref: send-emails-using-the-command-line-for-fun-and-profit
----
-Here are a few reasons why:
+= Send emails using the command line for fun and profit!
-1. send yourself and other people notification of cronjobs, scripts runs, CI
- jobs, *etc.*
+:ssmtp: https://wiki.archlinux.org/index.php/SSMTP
+:mailutils: https://mailutils.org/
-2. leverage the POSIX pipe `|`, and pipe emails away!
+Here are a few reasons why:
-3. because you can.
+. send yourself and other people notification of cronjobs, scripts runs, CI
+ jobs, _etc._
+. leverage the POSIX pipe `|`, and pipe emails away!
+. because you can.
Reason 3 is the fun part, reasons 1 and 2 are the profit part.
-First [install and configure SSMTP][ssmtp] for using, say, Gmail as the email
+First {ssmtp}[install and configure SSMTP] for using, say, Gmail as the email
server:
-```shell
+[source,shell]
+----
# file /etc/ssmtp/ssmtp.conf
FromLineOverride=YES
MailHub=smtp.gmail.com:587
@@ -29,19 +26,21 @@ rewriteDomain=gmail.com
root=username@gmail.com
AuthUser=username
AuthPass=password
-```
+----
-Now install [GNU Mailutils][gnu-mailutils] (`sudo apt-get install mailutils` or the
+Now install {mailutils}[GNU Mailutils] (`sudo apt-get install mailutils` or the
equivalent on your OS), and send yourself your first email:
-```shell
+[source,shell]
+----
echo body | mail -aFrom:email@example.com email@example.com -s subject
-```
+----
-And that's about it, you've got mail. Here are some more places where it might
+And that's about it, you've got mail. Here are some more places where it might
be applicable:
-```shell
+[source,shell]
+----
# report a backup cronjob, attaching logs
set -e
@@ -61,20 +60,16 @@ finish() {
trap finish EXIT
do-long-backup-cmd-here
-```
+----
-```
+....
# share the output of a cmd with someone
some-program | mail someone@example.com -s "The weird logs that I was talking about"
-```
+....
...and so on.
You may consider adding a `alias mail='mail -aFrom:email@example.com'` so you
-don't keep re-entering the "From: " part.
+don't keep re-entering the ``From:'' part.
Send yourself some emails to see it working!
-
-[ssmtp]: https://wiki.archlinux.org/index.php/SSMTP
-[gnu-mailutils]: https://mailutils.org/
-[forwarding-wiki-section]: https://wiki.archlinux.org/index.php/SSMTP#Forward_to_a_Gmail_mail_server
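The EXIT-trap notification pattern from the backup snippet above can be sketched so it runs without any SMTP setup; here `mail` is swapped for a log file (the commented line shows where the real `mail` call would go):

```shell
# Sketch of the EXIT-trap notification pattern, with `mail` replaced
# by a log file so the demo runs anywhere.
set -eu
log=$(mktemp)
(
  finish() {
    # real version: mail you@example.com -s "backup: exit $?" < "$log"
    echo "backup finished with status $?" >> "$log"
  }
  trap finish EXIT
  echo 'backing up...' >> "$log"
  true  # stand-in for the long-running backup command
)
cat "$log"
```

Because the trap fires on exit whether the backup command succeeds or dies, the notification (here, the log line; in the real script, the email) is sent either way, carrying the exit status.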
diff --git a/src/content/tils/2020/09/05/oldschool-pr.adoc b/src/content/tils/2020/09/05/oldschool-pr.adoc
index 5b4e445..c1c56b7 100644
--- a/src/content/tils/2020/09/05/oldschool-pr.adoc
+++ b/src/content/tils/2020/09/05/oldschool-pr.adoc
@@ -1,43 +1,32 @@
----
+= Pull requests with Git, the old school way
-title: Pull requests with Git, the old school way
+:empty:
+:cgit: https://git.zx2c4.com/cgit/
-date: 2020-09-05
-
-layout: post
-
-lang: en
-
-ref: pull-requests-with-git-the-old-school-way
-
-eu_categories: git
-
----
It might be news to you, as it was to me, that "pull requests" that you can
-create on a Git hosting provider's web UI[^pr-webui] like
-GitLab/Bitbucket/GitHub actually comes from Git itself: `git request-pull`.
-
-[^pr-webui]: And maybe even using the Git hosting provider's API from the
- command line!
+create on a Git hosting provider’s web
+UI{empty}footnote:pr-webui[
+ And maybe even using the Git hosting provider’s API from the command line!
+] like GitLab/Bitbucket/GitHub actually comes from Git itself:
+`git request-pull`.
At the very core, they accomplish the same thing: both the original and the web
UI ones are ways for you to request the project maintainers to pull in your
-changes from your fork. It's like saying: "hi there, I did some changes on my
+changes from your fork. It’s like saying: "hi there, I did some changes on my
clone of the repository, what do you think about bringing those in?".
-The only difference is that you're working with only Git itself, so you're not
+The only difference is that you’re working with only Git itself, so you’re not
tied to any Git hosting provider: you can send pull requests across them
-transparently! You could even use your own [cgit][cgit] installation. No need to
-be locked in by any of them, putting the "D" back in "DVCS": it's a
-**distributed** version control system.
-
-[cgit]: https://git.zx2c4.com/cgit/
+transparently! You could even use your own {cgit}[cgit] installation. No need
+to be locked in by any of them, putting the "D" back in "DVCS": it’s a
+*distributed* version control system.
-## `git request-pull` introduction
+== `git request-pull` introduction
-Here's the raw output of a `git request-pull`:
+Here’s the raw output of a `git request-pull`:
-```shell
+[source,shell]
+----
$ git request-pull HEAD public-origin
The following changes since commit 302c9f2f035c0360acd4e13142428c100a10d43f:
@@ -57,35 +46,39 @@ EuAndreh (1):
_posts/2020-08-31-the-database-i-wish-i-had.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
-```
+----
That very first line is saying: "create me a pull request with only a single
commit, defined by `HEAD`, and use the URL defined by `public-origin`".
-Here's a pitfall: you may try using your `origin` remote at first where I put
+Here’s a pitfall: you may try using your `origin` remote at first where I put
`public-origin`, but that is many times pointing to something like
`git@example.com`, or `git.example.com:repo.git` (check that with
-`git remote -v | grep origin`). On both cases those are addresses available for
+`git remote -v | grep origin`). In both cases those are addresses available for
interaction via SSH, and it would be better if your pull requests used an
address ready for public consumption.
A simple solution for that is for you to add the `public-origin` alias as the
HTTPS alternative to the SSH version:
-```shell
+[source,shell]
+----
$ git remote add public-origin https://example.com/user/repo
-```
+----
Every Git hosting provider exposes repositories via HTTPS.
Experiment with it yourself, and get acquainted with the CLI.
-## Delivering decentralized pull requests
+== Delivering decentralized pull requests
+
+:cli-email: link:../04/cli-email-fun-profit.html
Now that you can create the content of a pull request, you can just
-[deliver it][cli-email] to the interested parties email:
+{cli-email}[deliver it] to the interested parties' email:
-```shell
+[source,shell]
+----
# send a PR with your last commit to the author's email
git request-pull HEAD public-origin | mail author@example.com -s "PR: Add thing to repo"
@@ -97,14 +90,14 @@ git request-pull -p HEAD~5 public-origin | \
# send every commit that is new in "other-branch"
git request-pull master public-origin other-branch | \
  mail list@example.com -s 'PR: All commits from my "other-branch"'
-```
+----
-[cli-email]: {% link _tils/2020-09-04-send-emails-using-the-command-line-for-fun-and-profit.md %}
+== Conclusion
-## Conclusion
+:distgit: https://drewdevault.com/2018/07/23/Git-is-already-distributed.html
-In practice, I've never used or seen anyone use pull requests this way:
-everybody is just [sending patches via email][decentralized-git].
+In practice, I’ve never used or seen anyone use pull requests this way:
+everybody is just {distgit}[sending patches via email].
If you stop to think about this model, the problem of "Git hosting providers
becoming too centralized" is a non-issue, and "Git federation" proposals are a
@@ -114,5 +107,3 @@ Using Git this way is not scary or so weird as the first impression may suggest.
It is actually how Git was designed to be used.
Check `git help request-pull` for more info.
-
-[decentralized-git]: https://drewdevault.com/2018/07/23/Git-is-already-distributed.html
diff --git a/src/content/tils/2020/10/11/search-git-history.adoc b/src/content/tils/2020/10/11/search-git-history.adoc
index 251abe9..b6af8f9 100644
--- a/src/content/tils/2020/10/11/search-git-history.adoc
+++ b/src/content/tils/2020/10/11/search-git-history.adoc
@@ -1,41 +1,28 @@
----
+= Search changes to a filename pattern in Git history
-title: Search changes to a filename pattern in Git history
+:yet: link:../../08/14/browse-git.html
+:another: link:../../08/16/git-search.html
+:entry: link:../../08/28/grep-online.html
-date: 2020-10-11
+This is {yet}[yet] {another}[another] {entry}["search in Git"] TIL entry. You
+could say that Git has an unintuitive CLI, or that it is very powerful.
-layout: post
+I wanted to search for an old file that I knew was in the history of the
+repository, but was deleted some time ago. So I didn't really remember the
+name, only bits of it.
-lang: en
+I immediately went to the list of TILs I had written on searching in Git, but it
+wasn't readily obvious how to do it, so here it goes:
-ref: search-changes-to-a-filename-pattern-in-git-history
-
-eu_categories: git
-
----
-
-This is [yet][git-til-1] [another][git-til-2] ["search in Git"][git-til-3] TIL
-entry. You could say that Git has a unintuitive CLI, or that is it very
-powerful.
-
-I wanted to search for an old file that I new that was in the
-history of the repository, but was deleted some time ago. So I didn't really
-remember the name, only bits of it.
-
-I immediately went to the list of TILs I had written on searching in Git, but
-it wasn't readily obvious how to do it, so here it goes:
-
-```shell
+[source,shell]
+----
git log -- *pattern*
-```
+----
You could add globs before the pattern to match things on any directory, and add
our `-p` friend to promptly see the diffs:
-```shell
+[source,shell]
+----
git log -p -- **/*pattern*
-```
-
-[git-til-1]: {% link _tils/2020-08-14-browse-a-git-repository-at-a-specific-commit.md %}
-[git-til-2]: {% link _tils/2020-08-16-search-in-git.md %}
-[git-til-3]: {% link _tils/2020-08-28-grep-online-repositories.md %}
+----
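For a self-contained check of those commands, a throwaway repository is enough.
This is only a sketch; the repository, directory and file names below are
invented for the example:

```shell
# Demo of searching history by filename pattern; every name here is
# invented for the example.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir -p docs
echo 'quarterly numbers' > docs/old-report.txt
git add . && git commit -q -m 'add report'
git rm -q docs/old-report.txt
git commit -q -m 'remove report'
# Both commits that touched the file show up, even though it is now deleted:
git log --format=%s -- '*report*'
```

Quoting the pattern keeps the shell from expanding it before Git sees it; Git's
pathspec matching lets `*` cross directory boundaries, which is why the
top-level pattern finds the file inside `docs/`.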
diff --git a/src/content/tils/2020/11/08/find-broken-symlink.adoc b/src/content/tils/2020/11/08/find-broken-symlink.adoc
index bc97fc6..d8f6b20 100644
--- a/src/content/tils/2020/11/08/find-broken-symlink.adoc
+++ b/src/content/tils/2020/11/08/find-broken-symlink.adoc
@@ -1,36 +1,24 @@
----
+= Find broken symlinks with "find"
-title: Find broken symlinks with "find"
-
-date: 2020-11-08
-
-layout: post
-
-lang: en
-
-ref: find-broken-symlinks-with-find
-
-eu_categories: shell
-
----
+:annex: https://git-annex.branchable.com/
+:annex-wanted: https://git-annex.branchable.com/git-annex-wanted/
The `find` command knows how to show broken symlinks:
-```shell
+[source,shell]
+----
find . -xtype l
-```
+----
-This was useful to me when combined with [Git Annex][git-annex]. Its
-[`wanted`][git-annex-wanted] option allows you to have a "sparse" checkout of
-the content, and save space by not having to copy every annexed file locally:
+This was useful to me when combined with {annex}[Git Annex]. Its
+{annex-wanted}[`wanted`] option allows you to have a "sparse" checkout of the
+content, and save space by not having to copy every annexed file locally:
-```shell
+[source,shell]
+----
git annex wanted . 'exclude=Music/* and exclude=Videos/*'
-```
+----
You can `find` any broken symlinks outside those directories by querying with
Git Annex itself, but `find . -xtype l` works on other places too, where broken
symlinks might be a problem.
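A minimal sandbox shows the behaviour; the directory and link names are
invented for this example (`-xtype` is a GNU find test):

```shell
# Sketch: a dangling symlink shows up in the output, a valid one doesn't.
# All names here are invented for the example.
dir=$(mktemp -d) && cd "$dir"
ln -s /nonexistent-target broken-link   # target doesn't exist: broken
ln -s /tmp valid-link                   # target exists: not reported
find . -xtype l
```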
-
-[git-annex]: https://git-annex.branchable.com/
-[git-annex-wanted]: https://git-annex.branchable.com/git-annex-wanted/
diff --git a/src/content/tils/2020/11/12/diy-nix-bash-ci.adoc b/src/content/tils/2020/11/12/diy-nix-bash-ci.adoc
index 3336482..73919d5 100644
--- a/src/content/tils/2020/11/12/diy-nix-bash-ci.adoc
+++ b/src/content/tils/2020/11/12/diy-nix-bash-ci.adoc
@@ -1,27 +1,17 @@
----
+= DIY bare bones CI server with Bash and Nix
-title: DIY bare bones CI server with Bash and Nix
-
-date: 2020-11-12 3
-
-layout: post
-
-lang: en
-
-ref: diy-bare-bones-ci-server-with-bash-and-nix
-
-eu_categories: ci
-
----
+:post-receive: https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
+:example-project: https://euandreh.xyz/remembering/ci.html
With a server with Nix installed (no need for NixOS), you can leverage its build
-isolation for running CI jobs by adding a [post-receive][post-receive] Git hook
+isolation for running CI jobs by adding a {post-receive}[post-receive] Git hook
to the server.
In most of my project I like to keep a `test` attribute which runs the test with
-`nix-build -A test`. This way, a post-receive hook could look like:
+`nix-build -A test`. This way, a post-receive hook could look like:
-```shell
+[source,shell]
+----
#!/usr/bin/env bash
set -Eeuo pipefail
set -x
@@ -42,33 +32,30 @@ finish() {
trap finish EXIT
nix-build -A test
-```
+----
-We initially (lines #5 to #8) create a log file, named after *when* the run is
-running and for *which* commit it is running for. The `exec` and `tee` combo
-allows the output of the script to go both to `stdout` *and* the log file. This
+We initially (lines #5 to #8) create a log file, named after _when_ the run is
+running and for _which_ commit it is running for. The `exec` and `tee` combo
+allows the output of the script to go both to `stdout` _and_ the log file. This
makes the logs output show up when you do a `git push`.
Lines #10 to #13 create a fresh clone of the repository and line #20 runs the
test command.
After using a similar post-receive hook for a while, I now even generate a
-simple HTML file to make the logs available ([example project][ci-logs])
+simple HTML file to make the logs available ({example-project}[example project])
through the browser.
-[post-receive]: https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
-[ci-logs]: https://euandreh.xyz/remembering/ci.html
-
-## Upsides
+== Upsides
No vendor lock-in, as all you need is a server with Nix installed.
And if you pin the Nixpkgs version you're using, this very simple setup yields
extremely sandboxed runs in a very hermetic environment.
-## Downsides
+== Downsides
Besides the many missing shiny features of this very simplistic CI, `nix-build`
-can be very resource intensive. Specifically, it consumes too much memory. So if
-it has to download too many things, or the build closure gets too big, the
+can be very resource intensive. Specifically, it consumes too much memory. So
+if it has to download too many things, or the build closure gets too big, the
server might very well run out of memory.
diff --git a/src/content/tils/2020/11/12/git-bisect-automation.adoc b/src/content/tils/2020/11/12/git-bisect-automation.adoc
index 9c34b2a..c70bb2d 100644
--- a/src/content/tils/2020/11/12/git-bisect-automation.adoc
+++ b/src/content/tils/2020/11/12/git-bisect-automation.adoc
@@ -1,18 +1,4 @@
----
-
-title: Git bisect automation
-
-date: 2020-11-12 2
-
-layout: post
-
-lang: en
-
-ref: git-bisect-automation
-
-eu_categories: git
-
----
+= Git bisect automation
It is good to have a standardized way to run builds and tests on the repository
of a project, so that you can find when a bug was introduced by using
@@ -22,14 +8,15 @@ I've already been in the situation when a bug was introduced and I didn't know
how it even was occurring, and running Git bisect over hundreds of commits to
pinpoint the failing commit was very empowering:
-```
+....
$ GOOD_COMMIT_SHA=e1fd0a817d192c5a5df72dd7422e36558fa78e46
$ git bisect start HEAD $GOOD_COMMIT_SHA
$ git bisect run sh -c './build.sh && ./run-failing-case.sh'
-```
+....
Git will then do a binary search between the commits, and run the commands you
provide it with to find the failing commit.
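The exit status of the command is what drives the search: `0` marks a commit as
good, `125` tells bisect to skip the commit, and any other status from 1 to 127
marks it as bad. A driver script could look like this sketch, where
`./build.sh` and `./run-failing-case.sh` are the same placeholders used above:

```shell
# Hedged sketch of a bisect driver script; build.sh and
# run-failing-case.sh are placeholders from the snippet above.
cat > bisect-driver.sh <<'EOF'
#!/bin/sh
./build.sh || exit 125      # can't even build: skip this commit
exec ./run-failing-case.sh  # exit 0 = good, 1-127 (except 125) = bad
EOF
chmod +x bisect-driver.sh
# Then: git bisect run ./bisect-driver.sh
```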
Instead of being afraid of doing a bisect, you should instead leverage it, and
-make Git help you dig through the history of the repository to find the bad code.
+make Git help you dig through the history of the repository to find the bad
+code.
diff --git a/src/content/tils/2020/11/12/useful-bashvars.adoc b/src/content/tils/2020/11/12/useful-bashvars.adoc
index 33a072e..5061b64 100644
--- a/src/content/tils/2020/11/12/useful-bashvars.adoc
+++ b/src/content/tils/2020/11/12/useful-bashvars.adoc
@@ -1,59 +1,49 @@
----
+= Useful Bash variables
-title: Useful Bash variables
+:bash: https://www.gnu.org/software/bash/
+:bash-bang-bang: https://www.gnu.org/software/bash/manual/bash.html#Event-Designators
+:bash-dollar-underscore: https://www.gnu.org/software/bash/manual/bash.html#Special-Parameters
-date: 2020-11-12 1
+{bash}[GNU Bash] has a few two-letter variables that may be useful when typing
+on the terminal.
-layout: post
+== `!!`: the text of the last command
-lang: en
-
-ref: useful-bash-variables
-
-eu_categories: shell
-
----
-
-[GNU Bash][gnu-bash] has a few two letter variables that may be useful when
-typing on the terminal.
-
-[gnu-bash]: https://www.gnu.org/software/bash/
-
-## `!!`: the text of the last command
-
-The [`!!` variable][previous-command] refers to the previous command, and I find
+The {bash-bang-bang}[`!!` variable] refers to the previous command, and I find
it useful when following chains of symlinks:
-[previous-command]: https://www.gnu.org/software/bash/manual/bash.html#Event-Designators
-
-```shell
+[source,shell]
+----
$ which git
/run/current-system/sw/bin/git
$ readlink $(!!)
readlink $(which git)
/nix/store/5bgr1xpm4m0r72h9049jbbhagxdyrnyb-git-2.28.0/bin/git
-```
+----
It is also useful when you forget to prefix `sudo` to a command that requires
it:
-```shell
+[source,shell]
+----
$ requires-sudo.sh
requires-sudo.sh: Permission denied
$ sudo !!
sudo ./requires-sudo.sh
# all good
-```
+----
Bash prints the command expansion before executing it, so it is easier for you
to follow along with what it is doing.
-## `$_`: most recent parameter
+== `$_`: most recent parameter
-The [`$_` variable][recent-parameter] will give you the most recent parameter
-you provided to a previous argument, which can save you typing sometimes:
+The {bash-dollar-underscore}[`$_` variable] will give you the most recent
+parameter you provided to the previous command, which can save you typing
+sometimes:
-```shell
+[source,shell]
+----
# instead of...
$ mkdir -p a/b/c/d/
$ cd a/b/c/d/
@@ -61,11 +51,9 @@ $ cd a/b/c/d/
# ...you can:
$ mkdir -p a/b/c/d/
$ cd $_
-```
-
-[recent-parameter]: https://www.gnu.org/software/bash/manual/bash.html#Special-Parameters
+----
-## Conclusion
+== Conclusion
I wouldn't use those in a script, as it would make the script terser to read; I
find those useful shortcuts that are handy when writing at the interactive
diff --git a/src/content/tils/2020/11/14/gpodder-media.adoc b/src/content/tils/2020/11/14/gpodder-media.adoc
index a74b225..f722f35 100644
--- a/src/content/tils/2020/11/14/gpodder-media.adoc
+++ b/src/content/tils/2020/11/14/gpodder-media.adoc
@@ -1,24 +1,15 @@
----
+= gPodder as a media subscription manager
-title: gPodder as a media subscription manager
+:re-discover: https://www.charlieharrington.com/unexpected-useless-and-urgent
+:gpodder: https://gpodder.github.io/
-date: 2020-11-14
+As we {re-discover}[re-discover] the value of Atom/RSS feeds, most useful feed
+clients I know of don't support media, specifically audio and video.
-layout: post
+{gpodder}[gPodder] does.
-lang: en
-
-ref: gpodder-as-a-media-subscription-manager
-
----
-
-As we [re-discover][rss] the value of Atom/RSS feeds, most useful feed clients I
-know of don't support media, specifically audio and video.
-
-[gPodder][gpodder] does.
-
-It is mostly know as a desktop podcatcher. But the thing about podcasts is that
-the feed is provided through an RSS/Atom feed. So you can just use gPodder as
+It is mostly known as a desktop podcatcher. But the thing about podcasts is that
+the feed is provided through an RSS/Atom feed. So you can just use gPodder as
your media feed client, where you have control of what you look at.
The audio and video providers I know of offer an RSS/Atom view of their content,
@@ -26,8 +17,5 @@ so you can, say, treat any YouTube channel like a feed on its own.
gPodder will then manage your feeds, watched/unwatched, queue downloads, etc.
-Being obvious now, it was a big finding for me. If it got you interested, I
+Being obvious now, it was a big finding for me. If it got you interested, I
recommend you giving gPodder a try.
-
-[rss]: https://www.charlieharrington.com/unexpected-useless-and-urgent
-[gpodder]: https://gpodder.github.io/
diff --git a/src/content/tils/2020/11/30/git-notes-ci.adoc b/src/content/tils/2020/11/30/git-notes-ci.adoc
index f8dd063..bfce42a 100644
--- a/src/content/tils/2020/11/30/git-notes-ci.adoc
+++ b/src/content/tils/2020/11/30/git-notes-ci.adoc
@@ -1,28 +1,19 @@
----
+= Storing CI data on Git notes
-title: Storing CI data on Git notes
+:git-notes: https://git-scm.com/docs/git-notes
+:nix-bash-ci: link:../12/diy-nix-bash-ci.html
+:cgit: https://git.zx2c4.com/cgit/
-date: 2020-11-30
+Extending the bare bones CI server I've {nix-bash-ci}[talked about before],
+divoplade on Freenode suggested storing CI artifacts on {git-notes}[Git notes],
+such as tarballs, binaries, logs, _etc_.
-layout: post
+I've written a small script that will put log files and CI job data on Git
+notes, and make it visible on the porcelain log. It is a simple extension of
+the previous article:
-lang: en
-
-ref: storing-ci-data-on-git-notes
-
-eu_categories: git,ci
-
----
-
-Extending the bare bones CI server I've [talked about before][previous-article],
-divoplade on Freenode suggested storing CI artifacts on [Git notes][git-notes],
-such as tarballs, binaries, logs, *etc*.
-
-I've written a small script that will put log files and CI job data on Git notes,
-and make it visible on the porcelain log. It is a simple extension of the
-previous article:
-
-```shell
+[source,shell]
+----
#!/usr/bin/env bash
set -Eeuo pipefail
set -x
@@ -63,16 +54,17 @@ git config --global user.name 'EuAndreh CI'
./container make check site
./container make publish
-```
+----
-The important part is in the `finish()` function:
-- #25 stores the exit status and the generated filename separated by spaces;
-- #26 adds the log file in a note using the `refs/notes/ci-logs` ref;
-- #27 it adds a note to the commit saying how to see the logs.
+The important part is in the `finish()` function:
+
+- #25 stores the exit status and the generated filename separated by spaces;
+- #26 adds the log file in a note using the `refs/notes/ci-logs` ref;
+- #27 adds a note to the commit saying how to see the logs.
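In isolation, those steps look roughly like the sketch below. The demo
repository, variable names and note message are invented for the example; only
the `git notes` usage itself comes from the article:

```shell
# Hedged sketch of the finish() steps; the demo repository and
# variable names are invented for this example.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email ci@example.com
git config user.name 'demo CI'
git commit -q --allow-empty -m 'demo commit'
commit=$(git rev-parse HEAD)
status=0
logfile=ci.log
echo 'build ok' > "$logfile"
echo "$status $logfile" >> ci-runs.txt                             # exit status + filename
git notes --ref=refs/notes/ci-logs add -f -F "$logfile" "$commit"  # attach the log file
git notes add -f -m "CI logs: git notes --ref=refs/notes/ci-logs show $commit" "$commit"
git notes --ref=refs/notes/ci-logs show "$commit"
```

Since notes live on their own ref, they travel with the repository without
touching the commits themselves.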
A commit now has an attached note, and shows it whenever you look at it:
-```diff
+[source,diff]
+----
$ git show 87c57133abd8be5d7cc46afbf107f59b26066575
commit 87c57133abd8be5d7cc46afbf107f59b26066575
Author: EuAndreh <eu@euandre.org>
@@ -100,23 +92,20 @@ index d1830ca..a4ccde7 100644
(service dhcp-client-service-type)
#;
(service opensmtpd-service-type
-```
+----
-Other tools such as [cgit][cgit] will also show notes on the web interface:
-<https://euandre.org/git/servers/commit?id=87c57133abd8be5d7cc46afbf107f59b26066575>.
+Other tools such as {cgit}[cgit] will also show notes on the web interface:
+https://euandre.org/git/servers/commit?id=87c57133abd8be5d7cc46afbf107f59b26066575.
You can go even further: since cgit can serve raw blobs directly, you can even
serve such artifacts (log files, release artifacts, binaries) from cgit itself:
-```shell
+[source,shell]
+----
$ SHA="$(git notes --ref=refs/notes/ci-logs list 87c57133abd8be5d7cc46afbf107f59b26066575)"
$ echo "https://euandre.org/git/servers/blob?id=$SHA"
https://euandre.org/git/servers/blob?id=1707a97bae24e3864fe7943f8dda6d01c294fb5c
-```
+----
And like that you'll have cgit serving the artifacts for you:
-<https://euandre.org/git/servers/blob?id=1707a97bae24e3864fe7943f8dda6d01c294fb5c>.
-
-[previous-article]: {% link _tils/2020-11-12-diy-bare-bones-ci-server-with-bash-and-nix.md %}
-[git-notes]: https://git-scm.com/docs/git-notes
-[cgit]: https://git.zx2c4.com/cgit/
+https://euandre.org/git/servers/blob?id=1707a97bae24e3864fe7943f8dda6d01c294fb5c.
diff --git a/src/content/tils/2020/12/15/shellcheck-repo.adoc b/src/content/tils/2020/12/15/shellcheck-repo.adoc
index 71d10a3..960812e 100644
--- a/src/content/tils/2020/12/15/shellcheck-repo.adoc
+++ b/src/content/tils/2020/12/15/shellcheck-repo.adoc
@@ -1,52 +1,41 @@
----
+= Awk snippet: ShellCheck all scripts in a repository
-title: 'Awk snippet: ShellCheck all scripts in a repository'
+:awk-20-min: https://ferd.ca/awk-in-20-minutes.html
+:shellcheck: https://www.shellcheck.net/
-date: 2020-12-15
-
-updated_at: 2020-12-16
-
-layout: post
-
-lang: en
-
-ref: awk-snippet-shellcheck-all-scripts-in-a-repository
-
-eu_categories: shell
-
----
-
-Inspired by Fred Herbert's "[Awk in 20 Minutes][awk-20min]", here's a problem I
+Inspired by Fred Herbert's "{awk-20-min}[Awk in 20 Minutes]", here's a problem I
just solved with a line of Awk: run ShellCheck in all scripts of a repository.
In my repositories I usually have Bash and POSIX scripts, which I want to keep
-tidy with [ShellCheck][shellcheck]. Here's the first version of
+tidy with {shellcheck}[ShellCheck]. Here's the first version of
`assert-shellcheck.sh`:
-```shell
+[source,shell]
+----
#!/bin/sh -eux
find . -type f -name '*.sh' -print0 | xargs -0 shellcheck
-```
+----
This is the type of script that I copy around to all repositories, and I want it
to be capable of working on any repository, without requiring a list of files to
run ShellCheck on.
-This first version worked fine, as all my scripts had the '.sh' ending. But I
+This first version worked fine, as all my scripts had the `.sh' ending. But I
recently added some scripts without any extension, so `assert-shellcheck.sh`
-called for a second version. The first attempt was to try grepping the shebang
-line:
+called for a second version. The first attempt was to try grepping the shebang line:
-```shell
+[source,shell]
+----
$ grep '^#!/' assert-shellcheck.sh
#!/usr/sh
-```
+----
-Good, we have a grep pattern on the first try. Let's try to find all the
+Good, we have a grep pattern on the first try. Let's try to find all the
matching files:
-```shell
+[source,shell]
+----
$ find . -type f | xargs grep -l '^#!/'
./TODOs.org
./.git/hooks/pre-commit.sample
@@ -76,7 +65,7 @@ $ find . -type f | xargs grep -l '^#!/'
./scripts/songbooks.in
./scripts/with-container.sh
./scripts/assert-shellcheck.sh
-```
+----
This approach has a problem, though: it includes files ignored by Git, such as
`build-aux/install-sh~`, and even goes into the `.git/` directory and finds
@@ -84,7 +73,8 @@ sample hooks in `.git/hooks/*`.
To list the files that Git is tracking we'll try `git ls-files`:
-```shell
+[source,shell]
+----
$ git ls-files | xargs grep -l '^#!/'
TODOs.org
bootstrap
@@ -99,23 +89,25 @@ scripts/compile-readme.sh
scripts/generate-tasks-and-bugs.sh
scripts/songbooks.in
scripts/with-container.sh
-```
+----
It looks to be almost there, but the `TODOs.org` entry shows a flaw in it: grep
-is looking for a `'^#!/'` pattern on any part of the file. In my case,
+is looking for a +'^#!/'+ pattern on any part of the file. In my case,
`TODOs.org` had a snippet in the middle of the file where a line started with
-`#!/bin/sh`.
++#!/bin/sh+.
-So what we actually want is to match the **first** line against the pattern. We
+So what we actually want is to match the *first* line against the pattern. We
could loop through each file, get the first line with `head -n 1` and grep
-against that, but this is starting to look messy. I bet there is another way of
+against that, but this is starting to look messy. I bet there is another way of
doing it concisely...
-Let's try Awk. I need a way to select the line numbers to replace `head -n 1`,
-and to stop processing the file if the pattern matches. A quick search points me
-to using `FNR` for the former, and `{ nextline }` for the latter. Let's try it:
+Let's try Awk. I need a way to select the line numbers to replace `head -n 1`,
+and to stop processing the file if the pattern matches. A quick search points
+me to using `FNR` for the former, and `{ nextline }` for the latter. Let's try
+it:
-```shell
+[source,shell]
+----
$ git ls-files | xargs awk 'FNR>1 { nextfile } /^#!\// { print FILENAME; nextfile }'
bootstrap
build-aux/with-guile-env.in
@@ -129,43 +121,41 @@ scripts/compile-readme.sh
scripts/generate-tasks-and-bugs.sh
scripts/songbooks.in
scripts/with-container.sh
-```
+----
Great! Only `TODOs.org` is missing, but the script is much better: instead of
matching against any part of the file that may have a shebang-like line, we only
-look for the first. Let's put it back into the `assert-shellcheck.sh` file and
+look for the first. Let's put it back into the `assert-shellcheck.sh` file and
use `NULL` for separators to accommodate files with spaces in the name:
-```
+....
#!/usr/sh -eux
git ls-files -z | \
xargs -0 awk 'FNR>1 { nextfile } /^#!\// { print FILENAME; nextfile }' | \
xargs shellcheck
-```
+....
This is where I've stopped, but I imagine a likely improvement: match against
-only `#!/bin/sh` and `#!/usr/bin/env bash` shebangs (the ones I use most), to
+only +#!/bin/sh+ and +#!/usr/bin/env bash+ shebangs (the ones I use most), to
avoid running ShellCheck on Perl files, or other shebangs.
Also when reviewing the text of this article, I found that `{ nextfile }` is a
-GNU Awk extension. It would be an improvement if `assert-shellcheck.sh` relied
+GNU Awk extension. It would be an improvement if `assert-shellcheck.sh` relied
on the POSIX subset of Awk for working correctly.
-## *Update*
+== _Update_
After publishing, I could remove `{ nextfile }` and even make the script
simpler:
-```shell
+[source,shell]
+----
#!/usr/sh -eux
git ls-files -z | \
xargs -0 awk 'FNR==1 && /^#!\// { print FILENAME }' | \
xargs shellcheck
-```
+----
Now both the shell and the Awk usage are POSIX compatible.
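A throwaway directory is enough to check the final Awk filter on its own. The
sample files here are invented, and `printf '%s\0'` stands in for
`git ls-files -z` so the demo doesn't need a Git repository:

```shell
# Demo of the final Awk filter on invented sample files: only the file
# whose *first* line is a shebang gets printed.
dir=$(mktemp -d) && cd "$dir"
printf '#!/bin/sh\necho hi\n' > script
printf 'notes\n#!/bin/sh in the middle\n' > notes.txt
printf '%s\0' script notes.txt |
    xargs -0 awk 'FNR==1 && /^#!\// { print FILENAME }'
```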
-
-[awk-20min]: https://ferd.ca/awk-in-20-minutes.html
-[shellcheck]: https://www.shellcheck.net/
diff --git a/src/content/tils/2020/12/29/svg.adoc b/src/content/tils/2020/12/29/svg.adoc
index 54cca9a..2022c83 100644
--- a/src/content/tils/2020/12/29/svg.adoc
+++ b/src/content/tils/2020/12/29/svg.adoc
@@ -1,32 +1,21 @@
----
+= SVG favicon
-title: SVG favicon
-
-date: 2020-12-29
-
-updated_at: 2021-01-12
-
-layout: post
-
-lang: en
-
-ref: svg-favicon
-
----
+:favicon: link:../../../../img/favicon.svg
I've wanted to change this website's favicon from a plain `.ico` file to a
-proper SVG. The problem I was trying to solve was to reuse the same image on
+proper SVG. The problem I was trying to solve was to reuse the same image on
other places, such as avatars.
Generating a PNG from the existing 16x16 icon was possible but bad: the final
-image was blurry. Converting the `.ico` to an SVG was possible, but sub-optimal:
-tools try to guess some vector paths, and the final SVG didn't match the
-original.
+image was blurry. Converting the `.ico` to an SVG was possible, but
+sub-optimal: tools try to guess some vector paths, and the final SVG didn't
+match the original.
Instead I used a tool to draw the "vector pixels" as black squares, and after
getting the final result I manually cleaned up the generated XML:
-```xml
+[source,xml]
+----
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16">
<path d="M 0 8 L 1 8 L 1 9 L 0 9 L 0 8 Z" />
@@ -89,27 +78,28 @@ getting the final result I manually cleaned-up the generated XML:
<path d="M 14 11 L 15 11 L 15 12 L 14 12 L 14 11 Z" />
<path d="M 14 12 L 15 12 L 15 13 L 14 13 L 14 12 Z" />
</svg>
-```
+----
-The good thing about this new favicon
-(at [`/static/lord-favicon.svg`](/static/lord-favicon.svg)) is that
-a) it is simple enough that I feel
-comfortable editing it manually and b) it is an SVG, which means I can generate
-any desired size.
+The good thing about this new favicon (at {favicon}[`/static/lord-favicon.svg`])
+is that a) it is simple enough that I feel comfortable editing it manually and
+b) it is an SVG, which means I can generate any desired size.
With the new favicon file, I now had to add to the templates' `<head>` a
`<link>` to this icon:
-```html
+
+[source,html]
+----
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/static/favicon.svg">
...
-```
+----
-Still missing is a bitmap image for places that can't handle vector images. I
+Still missing is a bitmap image for places that can't handle vector images. I
used a Jekyll generator to create a PNG from the existing SVG:
-```ruby
+[source,ruby]
+----
module Jekyll
class FaviconGenerator < Generator
safe true
@@ -127,7 +117,7 @@ module Jekyll
end
end
end
-```
+----
I had to increase the priority of the generator so that it would run before
other places that would use a `{% link /static/lord-favicon.png %}`, otherwise
diff --git a/src/content/tils/2021/01/12/curl-awk-emails.adoc b/src/content/tils/2021/01/12/curl-awk-emails.adoc
index 880ddf1..875c655 100644
--- a/src/content/tils/2021/01/12/curl-awk-emails.adoc
+++ b/src/content/tils/2021/01/12/curl-awk-emails.adoc
@@ -1,28 +1,25 @@
----
+= Awk snippet: send email to multiple recipients with cURL
-title: 'Awk snippet: send email to multiple recipients with cURL'
+:neomutt: https://neomutt.org/
+:found-out-article: https://blog.edmdesigner.com/send-email-from-linux-command-line/
+:curl: https://curl.se/
-date: 2021-01-12
-
-layout: post
-
-lang: en
-
-ref: awk-snippet-send-email-to-multiple-recipients-with-curl
-
----
-
-As I experiment with [Neomutt][neomutt], I wanted to keep being able to enqueue emails for sending later like my previous setup, so that I didn't rely on having an internet connection.
+As I experiment with {neomutt}[Neomutt], I wanted to keep being able to enqueue
+emails for sending later like my previous setup, so that I didn't rely on having
+an internet connection.
My requirements for the `sendmail` command were:
-1. store the email in a file, and send it later.
-1. send from different addresses, using different SMTP servers;
-I couldn't find an MTA that could accomplish that, but I was able to quickly write a solution.
+. store the email in a file, and send it later;
+. send from different addresses, using different SMTP servers.
+
+I couldn't find an MTA that could accomplish that, but I was able to quickly
+write a solution.
The first part was the easiest: store the email in a file:
-```shell
+[source,shell]
+----
# ~/.config/mutt/muttrc:
set sendmail=~/bin/enqueue-email.sh
@@ -30,20 +27,24 @@ set sendmail=~/bin/enqueue-email.sh
#!/bin/sh -eu
cat - > "$HOME/mbsync/my-queued-emails/$(date -Is)"
-```
+----
-Now that I had the email file store locally, I needed a program to send the email from the file, so that I could create a cronjob like:
+Now that I had the email file stored locally, I needed a program to send the
+email from the file, so that I could create a cronjob like:
-```shell
+[source,shell]
+----
for f in ~/mbsync/my-queued-emails/*; do
~/bin/dispatch-email.sh "$f" && rm "$f"
done
-```
+----
-The `dispatch-email.sh` would have to look at the `From: ` header and decide which SMTP server to use.
-As I [found out][curl-email] that [curl][curl] supports SMTP and is able to send emails, this is what I ended up with:
+The `dispatch-email.sh` would have to look at the `From:` header and decide
+which SMTP server to use. As I {found-out-article}[found out] that {curl}[curl]
+supports SMTP and is able to send emails, this is what I ended up with:
-```shell
+[source,shell]
+----
#!/bin/sh -eu
F="$1"
@@ -79,24 +80,30 @@ else
echo 'Bad "From: " address'
exit 1
fi
-```
+----
Most of the curl flags used are self-explanatory, except for `$rcpt`.
-curl connects to the SMTP server, but doesn't set the recipient address by looking at the message.
-My solution was to generate the curl flags, store them in `$rcpt` and use it unquoted to leverage shell word splitting.
+curl connects to the SMTP server, but doesn't set the recipient address by
+looking at the message. My solution was to generate the curl flags, store them
+in `$rcpt` and use it unquoted to leverage shell word splitting.
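The word-splitting trick itself can be seen without curl at all; the addresses
below are invented for this toy illustration:

```shell
# Toy illustration of the unquoted-variable trick; the addresses are
# invented.  Left unquoted, each word in $rcpt becomes a separate
# argument, so one line is printed per word.
rcpt='--mail-rcpt to@example.com --mail-rcpt cc@example.com'
printf '%s\n' $rcpt   # deliberately unquoted
```

Quoting `"$rcpt"` instead would hand curl a single malformed argument, which is
exactly why the script relies on the unquoted expansion.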
-To me, the most interesting part was building the `$rcpt` flags.
-My first instinct was to try grep, but it couldn't print only matches in a regex.
-As I started to turn towards sed, I envisioned needing something else to loop over the sed output, and I then moved to Awk.
+To me, the most interesting part was building the `$rcpt` flags. My first
+instinct was to try grep, but it couldn't print only matches in a regex. As I
+started to turn towards sed, I envisioned needing something else to loop over
+the sed output, and I then moved to Awk.
-In the short Awk snippet, 3 things were new to me: the `match(...)`, `split(...)` and `for () {}`.
-The only other function I have ever used was `gsub(...)`, but these new ones felt similar enough that I could almost guess their behaviour and arguments.
-`match(...)` stores the matches of a regex on the given array positionally, and `split(...)` stores the chunks in the given array.
+In the short Awk snippet, 3 things were new to me: the `match(...)`,
+`split(...)` and `for () {}`. The only other function I have ever used was
+`gsub(...)`, but these new ones felt similar enough that I could almost guess
+their behaviour and arguments. `match(...)` stores the matches of a regex on
+the given array positionally, and `split(...)` stores the chunks in the given
+array.
I even did it incrementally:
-```shell
+[source,shell]
+----
$ H='To: to@example.com, to2@example.com\nCc: cc@example.com, cc2@example.com\nBcc: bcc@example.com,bcc2@example.com\n'
$ printf "$H" | awk '/^To: .*$/ { print $0 }'
To: to@example.com, to2@example.com
@@ -115,28 +122,27 @@ to@example.com,
$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos[2] }'
to2@example.com
$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos[3] }'
+----
-```
+(This isn't the verbatim interactive session, but a cleaned version to make it
+more readable.)
-(This isn't the verbatim interactive session, but a cleaned version to make it more readable.)
+At this point, I realized I needed a for loop over the `tos` array, and I moved
+the Awk snippet into the `~/bin/dispatch-email.sh`. I liked the final thing:
-At this point, I realized I needed a for loop over the `tos` array, and I moved the Awk snippet into the `~/bin/dispatch-email.sh`.
-I liked the final thing:
-
-```awk
+[source,awk]
+----
match($0, /^(To|Cc|Bcc): (.*)$/, m) {
split(m[2], tos, ",")
for (i in tos) {
print "--mail-rcpt " tos[i]
}
}
-```
+----
-As I learn more about Awk, I feel that it is too undervalued, as many people turn to Perl or other programming languages when Awk suffices.
-The advantage is pretty clear: writing programs that run on any POSIX system, without extra dependencies required.
+As I learn more about Awk, I feel that it is undervalued, as many people
+turn to Perl or other programming languages when Awk suffices. The advantage is
+pretty clear: writing programs that run on any POSIX system, without extra
+dependencies required.
Coding to the standards is underrated.
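One hedge on the Awk snippet above, though: the three-argument `match($0, /re/, m)` form is a gawk extension, not POSIX awk. A strictly POSIX variant of the same loop can get by with `RSTART` and `RLENGTH` (sketched here with a hypothetical one-line `Cc:` header):

```shell
# Hypothetical one-line input; the real script reads a whole email.
rcpt=$(printf 'Cc: cc@example.com, cc2@example.com\n' |
    awk 'match($0, /^(To|Cc|Bcc): /) {
        # POSIX match() sets RSTART/RLENGTH instead of filling an array
        n = split(substr($0, RSTART + RLENGTH), tos, ",")
        for (i = 1; i <= n; i++) {
            sub(/^ */, "", tos[i])  # drop the space after each comma
            print "--mail-rcpt " tos[i]
        }
    }')
printf '%s\n' "$rcpt"
```

The capture group disappears, but the header prefix can be skipped with `substr()` since `match()` reports where and how long the match was.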
-
-[neomutt]: https://neomutt.org/
-[curl-email]: https://blog.edmdesigner.com/send-email-from-linux-command-line/
-[curl]: https://curl.se/
diff --git a/src/content/tils/2021/01/17/posix-shebang.adoc b/src/content/tils/2021/01/17/posix-shebang.adoc
index 5f5b897..4e2fbe8 100644
--- a/src/content/tils/2021/01/17/posix-shebang.adoc
+++ b/src/content/tils/2021/01/17/posix-shebang.adoc
@@ -1,55 +1,58 @@
= POSIX sh and shebangs
-date: 2021-01-17
+:awk-1: link:../../../2020/12/15/shellcheck-repo.html
+:awk-2: link:../12/curl-awk-emails.html
-layout: post
+As I {awk-1}[keep moving] {awk-2}[towards POSIX], I'm in the process of
+migrating all my Bash scripts to POSIX sh.
-lang: en
-
-ref: posix-sh-and-shebangs
-
----
-
-As I [keep moving][posix-awk-0] [towards POSIX][posix-awk-1], I'm on the process of migrating all my Bash scripts to POSIX sh.
-
-As I dropped `[[`, arrays and other Bashisms, I was left staring at the first line of every script, wondering what to do: what is the POSIX sh equivalent of `#!/usr/bin/env bash`?
-I already knew that POSIX says nothing about shebangs, and that the portable way to call a POSIX sh script is `sh script.sh`, but I didn't know what to do with that first line.
+As I dropped `[[`, arrays and other Bashisms, I was left staring at the first
+line of every script, wondering what to do: what is the POSIX sh equivalent of
+`#!/usr/bin/env bash`? I already knew that POSIX says nothing about shebangs,
+and that the portable way to call a POSIX sh script is `sh script.sh`, but
+I didn't know what to do with that first line.
What I had previously was:
-```shell
+
+[source,shell]
+----
#!/usr/bin/env bash
set -Eeuo pipefail
cd "$(dirname "${BASH_SOURCE[0]}")"
-```
+----
-Obviously, the `$BASH_SOURCE` would be gone, and I would have to adapt some of my scripts to not rely on the script location.
-The `-E` and `-o pipefail` options were also gone, and would be replaced by nothing.
+Obviously, the `$BASH_SOURCE` would be gone, and I would have to adapt some of
+my scripts to not rely on the script location. The `-E` and `-o pipefail`
+options were also gone, and would be replaced by nothing.
I converted all of them to:
-```shell
+
+[source,shell]
+----
#!/bin/sh -eu
-```
+----
I moved the `-eu` options to the shebang line itself, striving for conciseness.
-But as I changed callers from `./script.sh` to `sh script.sh`, things started to fail.
-Some tests that should fail reported errors, but didn't return 1.
+But as I changed callers from `./script.sh` to `sh script.sh`, things started to
+fail. Some tests that should fail reported errors, but didn't return 1.
-My first reaction was to revert back to `./script.sh`, but the POSIX bug I caught is a strong strain, and when I went back to it, I figured that the callers were missing some flags.
-Specifically, `sh -eu script.sh`.
+My first reaction was to revert to `./script.sh`, but the POSIX bug I
+caught is a strong strain, and when I went back to it, I figured that the
+callers were missing some flags. Specifically, `sh -eu script.sh`.
-Then it clicked: when running with `sh script.sh`, the shebang line with the sh options is ignored, as it is a comment!
+Then it clicked: when running with `sh script.sh`, the shebang line with the sh
+options is ignored, as it is a comment!
Which means that the most POSIX-friendly shebang is:
-```shell
+[source,shell]
+----
#!/bin/sh
set -eu
-```
+----
-1. when running via `./script.sh`, if the system has an executable at `/bin/sh`, it will be used to run the script;
-2. when running via `sh script.sh`, the sh options aren't ignored as previously.
+. when running via `./script.sh`, if the system has an executable at `/bin/sh`,
+ it will be used to run the script;
+. when running via `sh script.sh`, the options are no longer ignored, since
+  `set -eu` runs as a regular command.
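A throwaway sketch that shows the difference (a hypothetical script, not one of the originals):

```shell
# Create a script whose only safety flags live in the shebang line.
dir=$(mktemp -d)
cat > "$dir/script.sh" <<'EOF'
#!/bin/sh -eu
false
echo 'still running: the -e in the shebang was ignored'
EOF
chmod +x "$dir/script.sh"

# Called via the interpreter, the shebang is a mere comment: no -e, so
# `false` doesn't abort the script and the echo still runs.
sh "$dir/script.sh"

# Executed directly, the kernel passes -eu to /bin/sh, so `false` aborts.
"$dir/script.sh" || echo "aborted with status $?"
```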
TIL.
-
-[posix-awk-0]: {% link _tils/2020-12-15-awk-snippet-shellcheck-all-scripts-in-a-repository.md %}
-[posix-awk-1]: {% link _tils/2021-01-12-awk-snippet-send-email-to-multiple-recipients-with-curl.md %}
diff --git a/src/content/tils/2021/04/24/cl-generic-precedence.adoc b/src/content/tils/2021/04/24/cl-generic-precedence.adoc
index 8051232..541afb0 100644
--- a/src/content/tils/2021/04/24/cl-generic-precedence.adoc
+++ b/src/content/tils/2021/04/24/cl-generic-precedence.adoc
@@ -1,20 +1,10 @@
----
+= Common Lisp argument precedence order parameterization of a generic function
-title: Common Lisp argument precedence order parameterization of a generic function
+When CLOS dispatches a method, it picks the most specific method definition for
+the argument list:
-date: 2021-04-24 2
-
-layout: post
-
-lang: en
-
-ref: common-lisp-argument-precedence-order-parameterization-of-a-generic-function
-
----
-
-When CLOS dispatches a method, it picks the most specific method definition to the argument list:
-
-```lisp
+[source,lisp]
+----
* (defgeneric a-fn (x))
#<STANDARD-GENERIC-FUNCTION A-FN (0) {5815ACB9}>
@@ -39,11 +29,13 @@ When CLOS dispatches a method, it picks the most specific method definition to t
* (a-fn 1)
:NUMBER-1
-```
+----
-CLOS uses a similar logic when choosing the method from parent classes, when multiple ones are available:
+CLOS uses a similar logic when choosing the method from parent classes, when
+multiple ones are available:
-```lisp
+[source,lisp]
+----
* (defclass class-a () ())
#<STANDARD-CLASS CLASS-A {583E0B25}>
@@ -63,11 +55,13 @@ CLOS uses a similar logic when choosing the method from parent classes, when mul
; Compiling Top-Level Form:
#<STANDARD-METHOD ANOTHER-FN (CLASS-B) {584B8895}>
-```
+----
-Given the above definitions, when inheriting from `class-a` and `class-b`, the order of inheritance matters:
+Given the above definitions, when inheriting from `class-a` and `class-b`, the
+order of inheritance matters:
-```lisp
+[source,lisp]
+----
* (defclass class-a-coming-first (class-a class-b) ())
#<STANDARD-CLASS CLASS-A-COMING-FIRST {584BE6AD}>
@@ -79,11 +73,14 @@ Given the above definitions, when inheriting from `class-a` and `class-b`, the o
* (another-fn (make-instance 'class-b-coming-first))
:CLASS-B
-```
+----
-Combining the order of inheritance with generic functions with multiple arguments, CLOS has to make a choice of how to pick a method given two competing definitions, and its default strategy is prioritizing from left to right:
+Combining the order of inheritance with generic functions with multiple
+arguments, CLOS has to make a choice of how to pick a method given two competing
+definitions, and its default strategy is prioritizing from left to right:
-```lisp
+[source,lisp]
+----
* (defgeneric yet-another-fn (obj1 obj2))
#<STANDARD-GENERIC-FUNCTION YET-ANOTHER-FN (0) {584D9EC9}>
@@ -95,43 +92,58 @@ Combining the order of inheritance with generic functions with multiple argument
* (yet-another-fn (make-instance 'class-a) (make-instance 'class-b))
:FIRST-ARG-SPECIALIZED
-```
+----
-CLOS has to make a choice between the first and the second definition of `yet-another-fn`, but its choice is just a heuristic.
-What if we want the choice to be based on the second argument, instead of the first?
+CLOS has to make a choice between the first and the second definition of
+`yet-another-fn`, but its choice is just a heuristic. What if we want the
+choice to be based on the second argument, instead of the first?
-For that, we use the `:argument-precedence-order` option when declaring a generic function:
+For that, we use the `:argument-precedence-order` option when declaring a
+generic function:
-```lisp
+[source,lisp]
+----
* (defgeneric yet-another-fn (obj1 obj2) (:argument-precedence-order obj2 obj1))
#<STANDARD-GENERIC-FUNCTION YET-ANOTHER-FN (2) {584D9EC9}>
* (yet-another-fn (make-instance 'class-a) (make-instance 'class-b))
:SECOND-ARG-SPECIALIZED
-```
+----
-I liked that the `:argument-precedence-order` option exists.
-We shouldn't have to change the arguments from `(obj1 obj2)` to `(obj2 obj1)` just to make CLOS pick the method that we want.
-We can configure its default behaviour if desired, and keep the order of arguments however it best fits the generic function.
+I liked that the `:argument-precedence-order` option exists. We shouldn't have
+to change the arguments from `(obj1 obj2)` to `(obj2 obj1)` just to make CLOS
+pick the method that we want. We can configure its default behaviour if
+desired, and keep the order of arguments however it best fits the generic
+function.
-## Comparison with Clojure
+== Comparison with Clojure
Clojure has an equivalent, when using `defmulti`.
-Since when declaring a multi-method with `defmulti` we must define the dispatch function, Clojure uses it to pick the method definition.
-Since the dispatch function is required, there is no need for a default behaviour, such as left-to-right.
+When declaring a multi-method with `defmulti`, we must define the dispatch
+function, which Clojure then uses to pick the method definition. Since the
+dispatch function is required, there is no need for a default behaviour, such
+as left-to-right.
-## Conclusion
+== Conclusion
-Making the argument precedence order configurable for generic functions but not for class definitions makes a lot of sense.
+Making the argument precedence order configurable for generic functions but not
+for class definitions makes a lot of sense.
-When declaring a class, we can choose the precedence order, and that is about it.
-But when defining a generic function, the order of arguments is more important to the function semantics, and the argument precedence being left-to-right is just the default behaviour.
+When declaring a class, we can choose the precedence order, and that is about
+it. But when defining a generic function, the order of arguments is more
+important to the function semantics, and the argument precedence being
+left-to-right is just the default behaviour.
-One shouldn't change the order of arguments of a generic function for the sake of tailoring it to the CLOS priority ranking algorithm, but doing it for a class definition is just fine.
+One shouldn't change the order of arguments of a generic function for the sake
+of tailoring it to the CLOS priority ranking algorithm, but doing it for a class
+definition is just fine.
TIL.
-## References
+== References
+
+:clos-wiki: https://en.wikipedia.org/wiki/Object-Oriented_Programming_in_Common_Lisp
-1. [Object-Oriented Programming in Common Lisp: A Programmer's Guide to CLOS](https://en.wikipedia.org/wiki/Object-Oriented_Programming_in_Common_Lisp), by Sonja E. Keene
+. {clos-wiki}[Object-Oriented Programming in Common Lisp: A Programmer's Guide
+ to CLOS], by Sonja E. Keene
diff --git a/src/content/tils/2021/04/24/clojure-autocurry.adoc b/src/content/tils/2021/04/24/clojure-autocurry.adoc
index c1e277f..d7dd557 100644
--- a/src/content/tils/2021/04/24/clojure-autocurry.adoc
+++ b/src/content/tils/2021/04/24/clojure-autocurry.adoc
@@ -1,22 +1,13 @@
----
+= Clojure auto curry
-title: Clojure auto curry
+:defcurry-orig: https://lorettahe.github.io/clojure/2016/09/22/clojure-auto-curry
-date: 2021-04-24 1
+Here's a simple macro defined by {defcurry-orig}[Loretta He] to create Clojure
+functions that are curried on all arguments, relying on Clojure's multi-arity
+support:
-updated_at: 2021-04-27
-
-layout: post
-
-lang: en
-
-ref: clojure-auto-curry
-
----
-
-Here's a simple macro defined by [Loretta He][lorettahe] to create Clojure functions that are curried on all arguments, relying on Clojure's multi-arity support:
-
-```clojure
+[source,clojure]
+----
(defmacro defcurry
[name args & body]
(let [partials (map (fn [n]
@@ -25,11 +16,12 @@ Here's a simple macro defined by [Loretta He][lorettahe] to create Clojure funct
`(defn ~name
(~args ~@body)
~@partials)))
-```
+----
A naive `add` definition, alongside its usage and macroexpansion:
-```clojure
+[source,clojure]
+----
user=> (defcurry add
[a b c d e]
(+ a b c d e))
@@ -64,22 +56,24 @@ user=> (pprint
([a b c] (clojure.core/partial add a b c))
([a b c d] (clojure.core/partial add a b c d))))
nil
-```
+----
-This simplistic `defcurry` definition doesn't support optional parameters, multi-arity, `&` rest arguments, docstrings, etc., but it could certainly evolve to do so.
+This simplistic `defcurry` definition doesn't support optional parameters,
+multi-arity, `&` rest arguments, docstrings, etc., but it could certainly evolve
+to do so.
-I like how `defcurry` is so short, and abdicates the responsability of doing the multi-arity logic to Clojure's built-in multi-arity support.
-Simple and elegant.
+I like how `defcurry` is so short, and abdicates the responsibility of doing the
+multi-arity logic to Clojure's built-in multi-arity support. Simple and
+elegant.
Same Clojure as before, now with auto-currying via macros.
-[lorettahe]: http://lorettahe.github.io/clojure/2016/09/22/clojure-auto-curry
-
-## Comparison with Common Lisp
+== Comparison with Common Lisp
My attempt at writing an equivalent for Common Lisp gives me:
-```lisp
+[source,lisp]
+----
(defun partial (fn &rest args)
(lambda (&rest args2)
(apply fn (append args args2))))
@@ -96,15 +90,17 @@ My attempt at writing an equivalent for Common Lisp gives me:
(let ((func (lambda ,args ,@body)))
(curry-n (- ,(length args) (length rest))
(apply #'partial func rest)))))
-```
+----
-Without built-in multi-arity support, we have to do more work, like tracking the number of arguments consumed so far.
-We also have to write `#'partial` ourselves.
-That is, without dependending on any library, sticking to ANSI Common Lisp.
+Without built-in multi-arity support, we have to do more work, like tracking the
+number of arguments consumed so far. We also have to write `#'partial`
+ourselves. That is, without depending on any library, sticking to ANSI
+Common Lisp.
The usage is pretty similar:
-```lisp
+[source,lisp]
+----
* (defcurry add (a b c d e)
(+ a b c d e))
ADD
@@ -128,8 +124,10 @@ ADD
(LET ((FUNC (LAMBDA (A B C D E) (+ A B C D E))))
(CURRY-N (- 5 (LENGTH REST)) (APPLY #'PARTIAL FUNC REST))))
T
-```
+----
-This also require `funcall`s, since we return a `lambda` that doesn't live in the function namespace.
+This also requires `funcall`s, since we return a `lambda` that doesn't live in
+the function namespace.
-Like the Clojure one, it doesn't support optional parameters, `&rest` rest arguments, docstrings, etc., but it also could evolve to do so.
+Like the Clojure one, it doesn't support optional parameters, `&rest` rest
+arguments, docstrings, etc., but it also could evolve to do so.
diff --git a/src/content/tils/2021/04/24/scm-nif.adoc b/src/content/tils/2021/04/24/scm-nif.adoc
index f53451b..f70e533 100644
--- a/src/content/tils/2021/04/24/scm-nif.adoc
+++ b/src/content/tils/2021/04/24/scm-nif.adoc
@@ -1,35 +1,28 @@
----
+= Three-way conditional for number signs on Lisp
-title: Three-way conditional for number signs on Lisp
+:on-lisp: https://www.paulgraham.com/onlisptext.html
+:sicp: https://mitpress.mit.edu/sites/default/files/sicp/index.html
-date: 2021-04-24 3
+A useful macro from Paul Graham's {on-lisp}[On Lisp] book:
-updated_at: 2021-08-14
-
-layout: post
-
-lang: en
-
-ref: three-way-conditional-for-number-signs-on-lisp
-
----
-
-A useful macro from Paul Graham's [On Lisp][on-lisp] book:
-
-```lisp
+[source,lisp]
+----
(defmacro nif (expr pos zero neg)
(let ((g (gensym)))
`(let ((,g ,expr))
(cond ((plusp ,g) ,pos)
((zerop ,g) ,zero)
(t ,neg)))))
-```
+----
-After I looked at this macro, I started seeing opportunities to using it in many places, and yet I didn't see anyone else using it.
+After I looked at this macro, I started seeing opportunities to use it in many
+places, and yet I didn't see anyone else using it.
-The latest example I can think of is section 1.3.3 of [Structure and Interpretation of Computer Programs][sicp], which I was reading recently:
+The latest example I can think of is section 1.3.3 of {sicp}[Structure and
+Interpretation of Computer Programs], which I was reading recently:
-```scheme
+[source,scheme]
+----
(define (search f neg-point pos-point)
(let ((midpoint (average neg-point pos-point)))
(if (close-enough? neg-point pos-point)
@@ -40,11 +33,14 @@ The latest example I can think of is section 1.3.3 of [Structure and Interpretat
((negative? test-value)
(search f midpoint pos-point))
(else midpoint))))))
-```
+----
-Not that the book should introduce such macro this early, but I couldn't avoid feeling bothered by not using the `nif` macro, which could even remove the need for the intermediate `test-value` variable:
+Not that the book should introduce such a macro this early, but I couldn't avoid
+feeling bothered by not using the `nif` macro, which could even remove the need
+for the intermediate `test-value` variable:
-```scheme
+[source,scheme]
+----
(define (search f neg-point pos-point)
(let ((midpoint (average neg-point pos-point)))
(if (close-enough? neg-point pos-point)
@@ -53,11 +49,10 @@ Not that the book should introduce such macro this early, but I couldn't avoid f
(search f neg-point midpoint)
midpoint
(search f midpoint pos-point)))))
-```
-
-It also avoids `cond`'s extra clunky parentheses for grouping, which is unnecessary but built-in.
+----
-As a macro, I personally feel it tilts the balance towards expressivenes despite its extra cognitive load toll.
+It also avoids `cond`'s extra clunky parentheses for grouping, which is
+unnecessary but built-in.
-[on-lisp]: http://www.paulgraham.com/onlisptext.html
-[sicp]: https://mitpress.mit.edu/sites/default/files/sicp/index.html
+As a macro, I personally feel it tilts the balance towards expressiveness despite
+its extra cognitive load toll.
diff --git a/src/content/tils/2021/07/23/git-tls-gpg.adoc b/src/content/tils/2021/07/23/git-tls-gpg.adoc
index fd42c1c..8fe86c6 100644
--- a/src/content/tils/2021/07/23/git-tls-gpg.adoc
+++ b/src/content/tils/2021/07/23/git-tls-gpg.adoc
@@ -1,27 +1,21 @@
----
+= GPG verification of Git repositories without TLS
-title: GPG verification of Git repositories without TLS
-
-date: 2021-07-23
-
-layout: post
-
-lang: en
-
-ref: gpg-verification-of-git-repositories-without-tls
-
----
+:empty:
+:git-protocol: https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protocols#_the_git_protocol
+:remembering: https://euandreh.xyz/remembering/
For online Git repositories that use the {git-protocol}[Git Protocol] for
serving code, you can use GPG to handle authentication, if you have the
committer's public key.
Here's how I'd verify that I've cloned an authentic version of
-[remembering][remembering][^not-available]:
+{remembering}[remembering]footnote:not-available[
+ Funnily enough, not available anymore via the Git Protocol, now only with
+ HTTPS.
+]:
-[^not-available]: Funnily enough, not available anymore via the Git Protocol, now only with HTTPS.
-
-```shell
+[source,shell]
+----
$ wget -qO- https://euandre.org/public.asc | gpg --import -
gpg: clef 81F90EC3CD356060 : « EuAndreh <eu@euandre.org> » n'est pas modifiée
gpg: Quantité totale traitée : 1
@@ -32,7 +26,7 @@ $ git verify-commit HEAD
gpg: Signature faite le dim. 27 juin 2021 16:50:21 -03
gpg: avec la clef RSA 5BDAE9B8B2F6C6BCBB0D6CE581F90EC3CD356060
gpg: Bonne signature de « EuAndreh <eu@euandre.org> » [ultime]
-```
+----
On the first line we import the public key (funnily enough, available via
HTTPS), and after cloning the code via the insecure `git://` protocol, we use
@@ -40,17 +34,12 @@ HTTPS), and after cloning the code via the insecure `git://` protocol, we use
The verification is successful, and we can see that the public key from the
signature matches the fingerprint of the imported one. However
-`git verify-commit` doesn't have an option to check which public key you want
-to verify the commit against. Which means that if a MITM attack happens, the
-attacker could very easily serve a malicious repository with signed commits,
-and you'd have to verify the public key by yourself. That would need to happen
-for subsequent fetches, too.
+`git verify-commit` doesn't have an option to check which public key you want to
+verify the commit against. Which means that if a MITM attack happens, the
+attacker could very easily serve a malicious repository with signed commits, and
+you'd have to verify the public key by yourself. That would need to happen for
+subsequent fetches, too.
Even though this is possible, it is not very convenient, and certainly very
brittle. Despite the fact that the Git Protocol is much faster, it being harder
to make secure is a big downside.
-
-
-
-[Git Protocol]: https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protocols#_the_git_protocol
-[remembering]: https://euandreh.xyz/remembering/
diff --git a/src/content/tils/2021/08/11/js-bigint-reviver.adoc b/src/content/tils/2021/08/11/js-bigint-reviver.adoc
index d71174d..657248a 100644
--- a/src/content/tils/2021/08/11/js-bigint-reviver.adoc
+++ b/src/content/tils/2021/08/11/js-bigint-reviver.adoc
@@ -1,39 +1,26 @@
----
+= Encoding and decoding JavaScript BigInt values with reviver
-title: Encoding and decoding JavaScript BigInt values with reviver
+:reviver-fn: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#using_the_reviver_parameter
+:bigint: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt
+:json-rfc: https://datatracker.ietf.org/doc/html/rfc8259
-date: 2021-08-11
-
-updated_at: 2021-08-13
-
-layout: post
-
-lang: en
-
-ref: encoding-and-decoding-javascript-bigint-values-with-reviver
-
----
-
-`JSON.parse()` accepts a second parameter: a [`reviver()` function][reviver].
+`JSON.parse()` accepts a second parameter: a {reviver-fn}[`reviver()` function].
It is a function that can be used to transform the `JSON` values as they're
being parsed.
-[reviver]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#using_the_reviver_parameter
-
-As it turns out, when combined with JavaScript's [`BigInt`] type, you can parse
-and encode JavaScript `BigInt` numbers via JSON:
+As it turns out, when combined with JavaScript's {bigint}[`BigInt`] type, you
+can parse and encode JavaScript `BigInt` numbers via JSON:
-[`BigInt`]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt
-
-```javascript
+[source,javascript]
+----
const bigIntReviver = (_, value) =>
- typeof value === "string" && value.match(/^-?[0-9]+n$/)
- ? BigInt(value.slice(0, value.length - 1))
- : value;
-```
+ typeof value === "string" && value.match(/^-?[0-9]+n$/)
+ ? BigInt(value.slice(0, value.length - 1))
+ : value;
+----
-I chose to interpret strings that contains only numbers and an ending `n` suffix
-as `BigInt` values, similar to how JavaScript interprets `123` (a number)
+I chose to interpret strings that contain only numbers and an ending `n`
+suffix as `BigInt` values, similar to how JavaScript interprets `123` (a number)
differently from `123n` (a `bigint`).
We do those checks before constructing the `BigInt` to avoid throwing needless
@@ -42,25 +29,27 @@ become a bottleneck when parsing large JSON values.
In order to do the full roundtrip, we now only need the `toJSON()` counterpart:
-```javascript
+[source,javascript]
+----
BigInt.prototype.toJSON = function() {
- return this.toString() + "n";
+ return this.toString() + "n";
};
-```
+----
With both `bigIntReviver` and `toJSON` defined, we can now successfully parse
and encode JavaScript objects with `BigInt` values transparently:
-```javascript
+[source,javascript]
+----
const s = `[
- null,
- true,
- false,
- -1,
- 3.14,
- "a string",
- { "a-number": "-123" },
- { "a-bigint": "-123n" }
+ null,
+ true,
+ false,
+ -1,
+ 3.14,
+ "a string",
+ { "a-number": "-123" },
+ { "a-bigint": "-123n" }
]`;
const parsed = JSON.parse(s, bigIntReviver);
@@ -71,11 +60,11 @@ console.log(s2);
console.log(typeof parsed[6]["a-number"])
console.log(typeof parsed[7]["a-bigint"])
-```
+----
The output of the above is:
-```
+....
[
null,
true,
@@ -89,12 +78,10 @@ The output of the above is:
[null,true,false,-1,3.14,"a string",{"a-number":"-123"},{"a-bigint":"-123n"}]
string
bigint
-```
+....
If you're on a web browser, you can probably try copying and pasting the above
code on the console right now, as is.
-Even though [`JSON`] doesn't include `BigInt` number, encoding and decoding them
-as strings is quite trivial on JavaScript.
-
-[`JSON`]: https://datatracker.ietf.org/doc/html/rfc8259
+Even though {json-rfc}[`JSON`] doesn't include `BigInt` numbers, encoding and
+decoding them as strings is quite trivial on JavaScript.