author    EuAndreh <eu@euandre.org>    2025-03-30 17:34:46 -0300
committer EuAndreh <eu@euandre.org>    2025-03-30 17:34:46 -0300
commit    7979d77fa8aca002282ad1e4c03e1244aa5d1dd1 (patch)
tree      e2514b61f76f81379a91092bd76440c37aded1e6
parent    src/content/security.txt: Derive it (diff)
download  euandre.org-7979d77fa8aca002282ad1e4c03e1244aa5d1dd1.tar.gz
          euandre.org-7979d77fa8aca002282ad1e4c03e1244aa5d1dd1.tar.xz
src/content/blog/: Upgrade 4 files to asciidoc
-rw-r--r--  src/content/blog/2018/12/21/ytdl-subs.adoc                      387
-rw-r--r--  src/content/blog/2019/06/02/nixos-stateless-workstation.adoc    206
-rw-r--r--  src/content/blog/2020/08/10/guix-srht.adoc                       87
-rw-r--r--  src/content/blog/2020/08/31/database-i-with-i-had.adoc          304
4 files changed, 472 insertions, 512 deletions
diff --git a/src/content/blog/2018/12/21/ytdl-subs.adoc b/src/content/blog/2018/12/21/ytdl-subs.adoc
index 183c624..b6540b7 100644
--- a/src/content/blog/2018/12/21/ytdl-subs.adoc
+++ b/src/content/blog/2018/12/21/ytdl-subs.adoc
@@ -1,274 +1,279 @@
----
-title: Using "youtube-dl" to manage YouTube subscriptions
-date: 2018-12-21
-layout: post
-lang: en
-ref: using-youtube-dl-to-manage-youtube-subscriptions
----
-I've recently read the
-[announcement](https://www.reddit.com/r/DataHoarder/comments/9sg8q5/i_built_a_selfhosted_youtube_subscription_manager/)
-of a very nice [self-hosted YouTube subscription
-manager](https://github.com/chibicitiberiu/ytsm). I haven't used
-YouTube's built-in subscriptions for a while now, and haven't missed
-it at all. When I saw the announcement, I considered writing about the
-solution I've built on top of [youtube-dl](https://youtube-dl.org/).
-
-## Background: the problem with YouTube
-
-In many ways, I agree with [André Staltz's view on data ownership and
-privacy](https://staltz.com/what-happens-when-you-block-internet-giants.html):
-
-> I started with the basic premise that "I want to be in control of my
-> data". Sometimes that meant choosing when to interact with an internet
-> giant and how much I feel like revealing to them. Most of times it
-> meant not interacting with them at all. I don't want to let them be in
-> full control of how much they can know about me. I don't want to be in
-> autopilot mode. (...) Which leads us to YouTube. While I was able to
-> find alternatives to Gmail (Fastmail), Calendar (Fastmail), Translate
-> (Yandex Translate), *etc.* YouTube remains as the most indispensable
-> Google-owned web service. It is really really hard to avoid consuming
-> YouTube content. It was probably the smartest startup acquisition
-> ever. My privacy-oriented alternative is to watch YouTube videos
-> through Tor, which is technically feasible but not polite to use the
-> Tor bandwidth for these purposes. I'm still scratching my head with
-> this issue.
-
-Even though I don't use most alternative services he mentions, I do
-watch videos from YouTube. But I also feel uncomfortable logging in to
-YouTube with a Google account, watching videos, creating playlists and
-similar things.
-
-Using the mobile app is worse: you can't even block ads in there.
-You're in less control on what you share with YouTube and Google.
-
-## youtube-dl
-
-youtube-dl is a command-line tool for downloading videos, from YouTube
-and [many other sites](https://rg3.github.io/youtube-dl/supportedsites.html):
-
-```shell
+= Using "youtube-dl" to manage YouTube subscriptions
+
+:ytsm-ann: https://old.reddit.com/r/DataHoarder/comments/9sg8q5/i_built_a_selfhosted_youtube_subscription_manager/
+:ytsm-code: https://github.com/chibicitiberiu/ytsm
+:ytdl: https://youtube-dl.org/
+
+I've recently read the {ytsm-ann}[announcement] of a very nice
+{ytsm-code}[self-hosted YouTube subscription manager]. I haven't used YouTube's
+built-in subscriptions for a while now, and haven't missed it at all. When I
+saw the announcement, I considered writing about the solution I've built on top
+of {ytdl}[youtube-dl].
+
+== Background: the problem with YouTube
+
+:net-giants: https://staltz.com/what-happens-when-you-block-internet-giants.html
+
+In many ways, I agree with {net-giants}[André Staltz's view on data ownership
+and privacy]:
+
+____
+I started with the basic premise that "I want to be in control of my data".
+Sometimes that meant choosing when to interact with an internet giant and how
+much I feel like revealing to them. Most of times it meant not interacting with
+them at all. I don't want to let them be in full control of how much they can
+know about me. I don't want to be in autopilot mode. (...) Which leads us to
+YouTube. While I was able to find alternatives to Gmail (Fastmail), Calendar
+(Fastmail), Translate (Yandex Translate), _etc._ YouTube remains as the most
+indispensable Google-owned web service. It is really really hard to avoid
+consuming YouTube content. It was probably the smartest startup acquisition
+ever. My privacy-oriented alternative is to watch YouTube videos through Tor,
+which is technically feasible but not polite to use the Tor bandwidth for these
+purposes. I'm still scratching my head with this issue.
+____
+
+Even though I don't use most alternative services he mentions, I do watch videos
+from YouTube. But I also feel uncomfortable logging in to YouTube with a Google
+account, watching videos, creating playlists and similar things.
+
+Using the mobile app is worse: you can't even block ads in there. You have
+less control over what you share with YouTube and Google.
+
+== youtube-dl
+
+:other-sites: https://rg3.github.io/youtube-dl/supportedsites.html
+
+youtube-dl is a command-line tool for downloading videos, from YouTube and
+{other-sites}[many other sites]:
+
+[source,shell]
+----
$ youtube-dl https://www.youtube.com/watch?v=rnMYZnY3uLA
[youtube] rnMYZnY3uLA: Downloading webpage
[youtube] rnMYZnY3uLA: Downloading video info webpage
[download] Destination: A Origem da Vida _ Nerdologia-rnMYZnY3uLA.mp4
[download] 100% of 32.11MiB in 00:12
-```
-
-It can be used to download individual videos as showed above, but it
-also has some interesting flags that we can use:
-
-- `--output`: use a custom template to create the name of the
- downloaded file;
-- `--download-archive`: use a text file for recording and remembering
- which videos were already downloaded;
-- `--prefer-free-formats`: prefer free video formats, like `webm`,
- `ogv` and Matroska `mkv`;
-- `--playlist-end`: how many videos to download from a "playlist" (a
- channel, a user or an actual playlist);
-- `--write-description`: write the video description to a
- `.description` file, useful for accessing links and extra content.
+----
+
+It can be used to download individual videos as shown above, but it also has
+some interesting flags that we can use:
+
+* `--output`: use a custom template to create the name of the downloaded file;
+* `--download-archive`: use a text file for recording and remembering which
+ videos were already downloaded;
+* `--prefer-free-formats`: prefer free video formats, like `webm`, `ogv` and
+ Matroska `mkv`;
+* `--playlist-end`: how many videos to download from a "playlist" (a channel, a
+ user or an actual playlist);
+* `--write-description`: write the video description to a `.description` file,
+ useful for accessing links and extra content.
Putting it all together:
-```shell
+[source,shell]
+----
$ youtube-dl "https://www.youtube.com/channel/UClu474HMt895mVxZdlIHXEA" \
- --download-archive ~/Nextcloud/cache/youtube-dl-seen.conf \
- --prefer-free-formats \
- --playlist-end 20 \
- --write-description \
- --output "~/Downloads/yt-dl/%(uploader)s/%(upload_date)s - %(title)s.%(ext)s"
-```
+ --download-archive ~/Nextcloud/cache/youtube-dl-seen.conf \
+ --prefer-free-formats \
+ --playlist-end 20 \
+ --write-description \
+ --output "~/Downloads/yt-dl/%(uploader)s/%(upload_date)s - %(title)s.%(ext)s"
+----
-This will download the latest 20 videos from the selected channel, and
-write down the video IDs in the `youtube-dl-seen.conf` file. Running it
-immediately after one more time won't have any effect.
+This will download the latest 20 videos from the selected channel, and write
+down the video IDs in the `youtube-dl-seen.conf` file. Running it again
+immediately afterwards won't have any effect.
If the channel posts one more video, running the same command again will
-download only the last video, since the other 19 were already
-downloaded.
+download only the last video, since the other 19 were already downloaded.
-With this basic setup you have a minimal subscription system at work,
-and you can create some functions to help you manage that:
+With this basic setup you have a minimal subscription system at work, and you
+can create some functions to help you manage that:
-```shell
+[source,shell]
+----
#!/bin/sh
export DEFAULT_PLAYLIST_END=15
download() {
- youtube-dl "$1" \
- --download-archive ~/Nextcloud/cache/youtube-dl-seen.conf \
- --prefer-free-formats \
- --playlist-end $2 \
- --write-description \
- --output "~/Downloads/yt-dl/%(uploader)s/%(upload_date)s - %(title)s.%(ext)s"
+ youtube-dl "$1" \
+ --download-archive ~/Nextcloud/cache/youtube-dl-seen.conf \
+ --prefer-free-formats \
+ --playlist-end "$2" \
+ --write-description \
+ --output "~/Downloads/yt-dl/%(uploader)s/%(upload_date)s - %(title)s.%(ext)s"
}
export -f download
download_user() {
- download "https://www.youtube.com/user/$1" ${2-$DEFAULT_PLAYLIST_END}
+ download "https://www.youtube.com/user/$1" "${2-$DEFAULT_PLAYLIST_END}"
}
export -f download_user
download_channel() {
- download "https://www.youtube.com/channel/$1" ${2-$DEFAULT_PLAYLIST_END}
+ download "https://www.youtube.com/channel/$1" "${2-$DEFAULT_PLAYLIST_END}"
}
export -f download_channel
download_playlist() {
- download "https://www.youtube.com/playlist?list=$1" ${2-$DEFAULT_PLAYLIST_END}
+ download "https://www.youtube.com/playlist?list=$1" "${2-$DEFAULT_PLAYLIST_END}"
}
export -f download_playlist
-```
+----
With these functions, you now can have a subscription fetching script to
download the latest videos from your favorite channels:
-```shell
+[source,shell]
+----
#!/bin/sh
download_user ClojureTV 15
-download_channel "UCmEClzCBDx-vrt0GuSKBd9g" 100
-download_playlist "PLqG7fA3EaMRPzL5jzd83tWcjCUH9ZUsbX" 15
-```
-
-Now, whenever you want to watch the latest videos, just run the above
-script and you'll get all of them in your local machine.
-
-## Tradeoffs
-
-### I've made it for myself, with my use case in mind
+download_channel 'UCmEClzCBDx-vrt0GuSKBd9g' 100
+download_playlist 'PLqG7fA3EaMRPzL5jzd83tWcjCUH9ZUsbX' 15
+----
+
+Now, whenever you want to watch the latest videos, just run the above script
+and you'll get all of them in your local machine.
-1. Offline
+== Tradeoffs
- My internet speed it somewhat reasonable[^internet-speed], but it is really
- unstable. Either at work or at home, it's not uncommon to loose internet
- access for 2 minutes 3~5 times every day, and stay completely offline for a
- couple of hours once every week.
+=== I've made it for myself, with my use case in mind
- Working through the hassle of keeping a playlist on disk has payed
- off many, many times. Sometimes I even not notice when the
- connection drops for some minutes, because I'm watching a video and
- working on some document, all on my local computer.
- There's also no quality adjustment for YouTube's web player, I
- always pick the higher quality and it doesn't change during the
- video. For some types of content, like a podcast with some tiny
- visual resources, this doesn't change much. For other types of
- content, like a keynote presentation with text written on the
- slides, watching on 144p isn't really an option.
+[qanda]
+Offline::
+My internet speed is somewhat
+reasonable{empty}footnote:[
+ Considering how expensive it is and the many ways it could be better, but also
+ how much it has improved over the last years, I say it's reasonable.
+], but it is really unstable. Either at work or at home, it's not uncommon to
+lose internet access for 2 minutes 3~5 times every day, and stay completely
+offline for a couple of hours once every week.
++
+Working through the hassle of keeping a playlist on disk has paid off many,
+many times. Sometimes I don't even notice when the connection drops for some
+minutes, because I'm watching a video and working on some document, all on my
+local computer.
++
+There's also no quality adjustment as in YouTube's web player: I always pick
+the highest quality and it doesn't change during the video. For some types of
+content, like a podcast with some tiny visual resources, this doesn't change
+much. For other types of content, like a keynote presentation with text written
+on the slides, watching on 144p isn't really an option.
++
+If the internet connection drops during the video download, youtube-dl will
+resume from where it stopped.
++
+This is an offline-first benefit that I really like, and it works well for me.
- If the internet connection drops during the video download,
- youtube-dl will resume from where it stopped.
- This is an offline first benefit that I really like, and works well
- for me.
+Sync the "seen" file::
+I already have a running instance of Nextcloud, so just dumping the
+`youtube-dl-seen.conf` file inside Nextcloud was a no-brainer.
++
+You could try putting it in a dedicated git repository, and wrap the script with
+an autocommit after every run. If you ever had a merge conflict, you'd simply
+accept all changes and then run the following to tidy up the file:
++
+[source,shell]
+----
+$ sort -u youtube-dl-seen.conf -o youtube-dl-seen.conf
+----
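++
+For reference, a minimal sketch of such an autocommit wrapper could look like
+the script below (the repository location and the `subscriptions.sh` path are
+assumptions; adjust them to your own setup):
++
+[source,shell]
+----
+#!/bin/sh
+# Hypothetical wrapper: fetch the latest videos, then autocommit the seen file.
+set -eu
+sh ~/path/to/subscriptions.sh
+cd ~/Nextcloud/cache/
+git add youtube-dl-seen.conf
+git commit -m 'Update youtube-dl seen file' || true # nothing new to commit
+git push
+----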
-2. Sync the "seen" file
- I already have a running instance of Nextcloud, so just dumping the
- `youtube-dl-seen.conf` file inside Nextcloud was a no-brainer.
+Doesn't work on mobile::
+My primary device that I use every day is my laptop, not my phone. It works well
+for me this way.
++
+Also, it's harder to add ad-blockers to mobile phones, and most mobile software
+still depends on Google's and Apple's blessing.
++
+If you wish, you can sync the videos to the SD card periodically, but that's a
+bit of extra manual work.
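++
+That periodic sync could be as small as an rsync invocation (the mount point
+below is just an assumption about where the SD card shows up):
++
+[source,shell]
+----
+$ rsync -av --delete ~/Downloads/yt-dl/ /run/media/$USER/sdcard/yt-dl/
+----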
- You could try putting it in a dedicated git repository, and wrap the
- script with an autocommit after every run. If you ever had a merge
- conflict, you'd simply accept all changes and then run:
- ```shell
- $ uniq youtube-dl-seen.conf > youtube-dl-seen.conf
- ```
+=== The Good
- to tidy up the file.
-3. Doesn't work on mobile
+[qanda]
+Better privacy::
+We don't even have to configure the ad-blocker to keep ads and trackers away!
++
+YouTube still has your IP address, so using a VPN is always a good idea.
+However, a timing analysis would be able to identify you (considering the
+current implementation).
- My primary device that I use everyday is my laptop, not my phone. It
- works well for me this way.
- Also, it's harder to add ad-blockers to mobile phones, and most
- mobile software still depends on Google's and Apple's blessing.
+No need to self-host::
+There's no host that needs maintenance. Everything runs locally.
++
+As long as you keep youtube-dl itself up to date and sync your "seen" file,
+there's little extra work to do.
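++
+Keeping youtube-dl up to date is usually a single command, depending on how it
+was installed (`--update` works for the standalone binary; use pip if that's
+how you installed it):
++
+[source,shell]
+----
+$ youtube-dl --update
+$ pip install --upgrade youtube-dl
+----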
- If you wish, you can sync the videos to the SD card periodically,
- but that's a bit of extra manual work.
-### The Good
+Track your subscriptions with git::
+After creating a `subscriptions.sh` executable that downloads all the videos,
+you can add it to git and use it to track metadata about your subscriptions.
-1. Better privacy
- We don't even have to configure the ad-blocker to keep ads and
- trackers away!
+=== The Bad
- YouTube still has your IP address, so using a VPN is always a good
- idea. However, a timing analysis would be able to identify you
- (considering the current implementation).
-2. No need to self-host
+[qanda]
+Maximum playlist size is your disk size::
+This is a good thing for getting a realistic view of your actual "watch later"
+list. However, I've run out of disk space many times, and now I need to be more
+aware of how much is left.
- There's no host that needs maintenance. Everything runs locally.
- As long as you keep youtube-dl itself up to date and sync your
- "seen" file, there's little extra work to do.
+=== The Ugly
-3. Track your subscriptions with git
+We can only avoid all the bad parts of YouTube with youtube-dl as long as
+YouTube keeps the videos public and programmatically accessible. If YouTube
+ever blocks that, we'd lose the ability to consume content this way, but also
+lose confidence in considering YouTube a healthy repository of videos on the
+internet.
- After creating a `subscriptions.sh` executable that downloads all
- the videos, you can add it to git and use it to track metadata about
- your subscriptions.
-### The Bad
+== Going beyond
-1. Maximum playlist size is your disk size
+Since you're running everything locally, here are some possibilities to be
+explored:
- This is a good thing for getting a realistic view on your actual
- "watch later" list. However I've run out of disk space many
- times, and now I need to be more aware of how much is left.
-### The Ugly
+=== A playlist that is too long for being downloaded all at once
-We can only avoid all the bad parts of YouTube with youtube-dl as long
-as YouTube keeps the videos public and programmatically accessible. If
-YouTube ever blocks that we'd loose the ability to consume content this
-way, but also loose confidence on considering YouTube a healthy
-repository of videos on the internet.
-
-## Going beyond
+You can wrap the `download_playlist` function (let's call the wrapper
+`inc_download`) and instead of passing it a fixed number to the `--playlist-end`
+parameter, you can store the `$n` in a file (something like
+`$HOME/.yt-db/$PLAYLIST_ID`) and increment it by `$step` every time you run
+`inc_download`.
-Since you're running everything locally, here are some possibilities to
-be explored:
+This way you can incrementally download videos from a huge playlist without
+filling your disk with gigabytes of content all at once.
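+
+A possible sketch of such a wrapper, reusing the `download_playlist` function
+from the earlier script (the default `step` of 5 is an arbitrary choice):
+
+[source,shell]
+----
+# Incrementally raise the --playlist-end limit for a given playlist.
+# Assumes download_playlist from the subscriptions script is already defined.
+inc_download() {
+    playlist_id="$1"
+    step="${2-5}"
+    mkdir -p "$HOME/.yt-db"
+    n_file="$HOME/.yt-db/$playlist_id"
+    n="$(cat "$n_file" 2>/dev/null || echo 0)"
+    n=$((n + step))
+    echo "$n" > "$n_file"
+    download_playlist "$playlist_id" "$n"
+}
+----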
-### A playlist that is too long for being downloaded all at once
-You can wrap the `download_playlist` function (let's call the wrapper
-`inc_download`) and instead of passing it a fixed number to the
-`--playlist-end` parameter, you can store the `$n` in a folder
-(something like `$HOME/.yt-db/$PLAYLIST_ID`) and increment it by `$step`
-every time you run `inc_download`.
+=== Multiple computer scenario
-This way you can incrementally download videos from a huge playlist
-without filling your disk with gigabytes of content all at once.
+The `download_playlist` function could be aware of the specific machine that it
+is running on and apply specific policies depending on the machine: always
+download everything; only download videos that aren't present anywhere else;
+_etc._
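+
+One way to sketch that idea is to branch on the hostname inside the function
+(the host names and the policies below are made up):
+
+[source,shell]
+----
+# Policy-aware variant of download_playlist; host names and limits are made up.
+download_playlist() {
+    case "$(hostname)" in
+        desktop) # plenty of disk: grab everything
+            download "https://www.youtube.com/playlist?list=$1" 10000;;
+        laptop) # limited disk: keep only the most recent videos
+            download "https://www.youtube.com/playlist?list=$1" \
+                "${2-$DEFAULT_PLAYLIST_END}";;
+        *)
+            echo "no download policy for $(hostname)" >&2;;
+    esac
+}
+----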
-### Multiple computer scenario
-The `download_playlist` function could be aware of the specific machine
-that it is running on and apply specific policies depending on the
-machine: always download everything; only download videos that aren't
-present anywhere else; *etc.*
+== Conclusion
-## Conclusion
+youtube-dl is a great tool to keep at hand. It covers a really large range of
+video websites and works robustly.
-youtube-dl is a great tool to keep at hand. It covers a really large
-range of video websites and works robustly.
+Feel free to copy and modify this code, and send me suggestions of improvements
+or related content.
-Feel free to copy and modify this code, and
-[send me](mailto:{{ site.author.email }}) suggestions of improvements or related
-content.
-
-## *Edit*
+== _Edit_
2019-05-22: Fix spelling.
-
-[^internet-speed]: Considering how expensive it is and the many ways it could be
- better, but also how much it has improved over the last years, I say it's
- reasonable.
diff --git a/src/content/blog/2019/06/02/nixos-stateless-workstation.adoc b/src/content/blog/2019/06/02/nixos-stateless-workstation.adoc
index c0cfe75..541440b 100644
--- a/src/content/blog/2019/06/02/nixos-stateless-workstation.adoc
+++ b/src/content/blog/2019/06/02/nixos-stateless-workstation.adoc
@@ -1,150 +1,146 @@
= Using NixOS as a stateless workstation
-date: 2019-06-02
-
-layout: post
-
-lang: en
-
-ref: using-nixos-as-an-stateless-workstation
-
----
-
-Last week[^last-week] I changed back to an old[^old-computer] Samsung laptop, and installed
-[NixOS](https://nixos.org/) on it.
-
-After using NixOS on another laptop for around two years, I wanted
-verify how reproducible was my desktop environment, and how far does
-NixOS actually can go on recreating my whole OS from my configuration
-files and personal data. I gravitated towards NixOS after trying (and
-failing) to create an `install.sh` script that would imperatively
-install and configure my whole OS using apt-get. When I found a
-GNU/Linux distribution that was built on top of the idea of
-declaratively specifying the whole OS I was automatically convinced[^convinced-by-declarative-aspect].
-
-I was impressed. Even though I've been experiencing the benefits of Nix
-isolation daily, I always felt skeptical that something would be
-missing, because the devil is always on the details. But the result was
-much better than expected!
+:empty:
+:nixos: https://nixos.org/
+
+Last
+week{empty}footnote:last-week[
+ "Last week" as of the start of this writing, so around the end of May 2019.
+] I changed back to an
+old{empty}footnote:old-computer[
+ I was using a 32GB RAM, i7 and 250GB SSD Samsung laptop. The switch was back
+ to an 8GB RAM, i5 and 500GB HDD Dell laptop. The biggest difference I noticed
+ was on faster memory, both RAM availability and the disk speed, but I had
+ 250GB less local storage space.
+] Samsung laptop, and installed {nixos}[NixOS] on it.
+
+After using NixOS on another laptop for around two years, I wanted to verify
+how reproducible my desktop environment was, and how far NixOS can actually go
+in recreating my whole OS from my configuration files and personal data. I
+gravitated towards NixOS after trying (and failing) to create an `install.sh`
+script that would imperatively install and configure my whole OS using apt-get.
+When I found a GNU/Linux distribution that was built on top of the idea of
+declaratively specifying the whole OS I was automatically
+convinced{empty}footnote:convincend-by-declarative-aspect[
+ The declarative configuration aspect is something that I now completely take
+ for granted, and wouldn't consider using something which isn't declarative. A
+ good metric to show this is me realising that I can't pinpoint the moment when
+ I decided to switch to NixOS. It's like I had a distant past when this wasn't
+ true.
+].
+
+I was impressed. Even though I've been experiencing the benefits of Nix
+isolation daily, I always felt skeptical that something would be missing,
+because the devil is always in the details. But the result was much better than
+expected!
There were only 2 missing configurations:
-1. tap-to-click on the touchpad wasn't enabled by default;
-2. the default theme from the gnome-terminal is "Black on white"
- instead of "White on black".
+. tap-to-click on the touchpad wasn't enabled by default;
+. the default theme from the gnome-terminal is "Black on white" instead of
+ "White on black".
That's all.
-I haven't checked if I can configure those in NixOS GNOME module, but I
-guess both are scriptable and could be set in a fictional `setup.sh`
-run.
+I haven't checked if I can configure those in the NixOS GNOME module, but I
+guess both are scriptable and could be set in a fictional `setup.sh` run.
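+
+For the tap-to-click part, such a fictional `setup.sh` could be as small as a
+single gsettings call (I haven't verified the gnome-terminal profile keys, so
+only the touchpad bit is sketched here):
+
+[source,shell]
+----
+#!/bin/sh
+# Fictional setup.sh: enable tap-to-click for the current GNOME user.
+gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true
+----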
-This makes me really happy, actually. More happy than I anticipated.
+This makes me really happy, actually. More happy than I anticipated.
-Having such a powerful declarative OS makes me feel like my data is the
-really important stuff (as it should be), and I can interact with it on
-any workstation. All I need is an internet connection and a few hours to
-download everything. It feels like my physical workstation and the
-installed OS are serving me and my data, instead of me feeling as
-hostage to the specific OS configuration at the moment. Having a few
-backup copies of everything important extends such peacefulness.
+Having such a powerful declarative OS makes me feel like my data is the really
+important stuff (as it should be), and I can interact with it on any
+workstation. All I need is an internet connection and a few hours to download
+everything. It feels like my physical workstation and the installed OS are
+serving me and my data, instead of me feeling like a hostage to the specific OS
+configuration at the moment. Having a few backup copies of everything important
+extends such peacefulness.
After this positive experience with recreating my OS from simple Nix
-expressions, I started to wonder how far I could go with this, and
-started considering other areas of improvements:
+expressions, I started to wonder how far I could go with this, and started
+considering other areas of improvements:
== First run on a fresh NixOS installation
Right now the initial setup relies on non-declarative manual tasks, like
-decrypting some credentials, or manually downloading **this** git
-repository with specific configurations before **that** one.
+decrypting some credentials, or manually downloading *this* git repository with
+specific configurations before *that* one.
-I wonder what some areas of improvements are on this topic, and if
-investing on it is worth it (both time-wise and happiness-wise).
+I wonder what some areas of improvement are on this topic, and if investing in
+it is worth it (both time-wise and happiness-wise).
== Emacs
-Right now I'm using the [Spacemacs](http://spacemacs.org/), which is a
-community package curation and configuration on top of
-[Emacs](https://www.gnu.org/software/emacs/).
+:spacemacs: https://spacemacs.org/
+:emacs: https://www.gnu.org/software/emacs/
+:layers: https://spacemacs.org/doc/LAYERS.html
+:there: https://nixos.org/nixos/manual/index.html#module-services-emacs-adding-packages
+:packages: https://www.gnu.org/software/guix/manual/en/html_node/Application-Setup.html#Emacs-Packages
+
+Right now I'm using {spacemacs}[Spacemacs], which is a community package
+curation and configuration on top of {emacs}[Emacs].
-Spacemacs does support the notion of
-[layers](http://spacemacs.org/doc/LAYERS.html), which you can
+Spacemacs does support the notion of {layers}[layers], which you can
declaratively specify and let Spacemacs do the rest.
-However this solution isn't nearly as robust as Nix: being purely
-functional, Nix does describe everything required to build a derivation,
-and knows how to do so. Spacemacs it closer to more traditional package
-managers: even though the layers list is declarative, the installation
-is still very much imperative. I've had trouble with Spacemacs not
-behaving the same on different computers, both with identical
-configurations, only brought to convergence back again after a
-`git clean -fdx` inside `~/.emacs.d/`.
+However this solution isn't nearly as robust as Nix: being purely functional,
+Nix does describe everything required to build a derivation, and knows how to do
+so. Spacemacs is closer to more traditional package managers: even though the
+layers list is declarative, the installation is still very much imperative.
+I've had trouble with Spacemacs not behaving the same on different computers,
+both with identical configurations, only brought back to convergence after
+a `git clean -fdx` inside `~/.emacs.d/`.
+
+The ideal solution would be managing Emacs packages with Nix itself. After a
+quick search I did find that {there}[there is support for Emacs packages in
+Nix]. So far I was only aware of {packages}[Guix support for Emacs packages].
-The ideal solution would be managing Emacs packages with Nix itself.
-After a quick search I did found that [there is support for Emacs
-packages in
-Nix](https://nixos.org/nixos/manual/index.html#module-services-emacs-adding-packages).
-So far I was only aware of [Guix support for Emacs packages](https://www.gnu.org/software/guix/manual/en/html_node/Application-Setup.html#Emacs-Packages).
+This isn't a trivial change because Spacemacs does include extra curation and
+configuration on top of Emacs packages. I'm not sure the best way to improve
+this right now.
-This isn't a trivial change because Spacemacs does include extra
-curation and configuration on top of Emacs packages. I'm not sure the
-best way to improve this right now.
+== myrepos
-### myrepos
+:myrepos: https://myrepos.branchable.com/
-I'm using [myrepos](https://myrepos.branchable.com/) to manage all my
-git repositories, and the general rule I apply is to add any repository
-specific configuration in myrepos' `checkout` phase:
+I'm using {myrepos}[myrepos] to manage all my git repositories, and the general
+rule I apply is to add any repository specific configuration in myrepos'
+`checkout` phase:
-```shell
+[source,shell]
+----
# sample ~/.mrconfig file snippet
[dev/guix/guix]
checkout =
git clone https://git.savannah.gnu.org/git/guix.git guix
cd guix/
git config sendemail.to guix-patches@gnu.org
-```
+----
This way when I clone this repo again the email sending is already
pre-configured.
-This works well enough, but the solution is too imperative, and my
-`checkout` phases tend to become brittle over time if not enough care is
-taken.
-
-### GNU Stow
-
-For my home profile and personal configuration I already have a few
-dozens of symlinks that I manage manually. This has worked so far, but
-the solution is sometimes fragile and [not declarative at all][symlinks]. I
-wonder if something like [GNU Stow][stow] can help me simplify this.
+This works well enough, but the solution is too imperative, and my `checkout`
+phases tend to become brittle over time if not enough care is taken.
-[symlinks]: https://euandre.org/git/dotfiles/tree/bash/symlinks.sh?id=316939aa215181b1d22b69e94241eef757add98d
-[stow]: https://www.gnu.org/software/stow/
+== GNU Stow
-## Conclusion
+:not-at-all: https://euandre.org/git/dotfiles/tree/bash/symlinks.sh?id=316939aa215181b1d22b69e94241eef757add98d
+:stow: https://www.gnu.org/software/stow/
-I'm really satisfied with NixOS, and I intend to keep using it. If what
-I've said interests you, maybe try tinkering with the [Nix package
-manager](https://nixos.org/nix/) (not the whole NixOS) on your current
-distribution (it can live alongside any other package manager).
+For my home profile and personal configuration I already have a few dozens of
+symlinks that I manage manually. This has worked so far, but the solution is
+sometimes fragile and {not-at-all}[not declarative at all]. I wonder if
+something like {stow}[GNU Stow] can help me simplify this.
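+
+If it does, the usage would presumably look something like the lines below (the
+dotfiles layout is just an assumption):
+
+[source,shell]
+----
+$ cd ~/dotfiles/
+$ stow --target "$HOME" bash    # symlink the files under bash/ into $HOME
+$ stow --delete --target "$HOME" bash    # remove those symlinks again
+----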
-If you have experience with declarative Emacs package managements, GNU
-Stow or any similar tool, *etc.*,
-[I'd like some tips](mailto:{{ site.author.email }}). If you don't have any
-experience at all, I'd still love to hear from you.
+== Conclusion
-[^last-week]: "Last week" as of the start of this writing, so around the end of
- May 2019.
+:nix: https://nixos.org/nix/
-[^old-computer]: I was using a 32GB RAM, i7 and 250GB SSD Samsung laptop. The
- switch was back to a 8GB RAM, i5 and 500GB HDD Dell laptop. The biggest
- difference I noticed was on faster memory, both RAM availability and the
- disk speed, but I had 250GB less local storage space.
+I'm really satisfied with NixOS, and I intend to keep using it. If what I've
+said interests you, maybe try tinkering with the {nix}[Nix package manager] (not
+the whole NixOS) on your current distribution (it can live alongside any other
+package manager).
-[^convinced-by-declarative-aspect]: The declarative configuration aspect is
- something that I now completely take for granted, and wouldn't consider
- using something which isn't declarative. A good metric to show this is me
- realising that I can't pinpoint the moment when I decided to switch to
- NixOS. It's like I had a distant past when this wasn't true.
+If you have experience with declarative Emacs package management, GNU Stow or
+any similar tool, _etc._, mail me some tips. If you don't have any experience
+at all, I'd still love to hear from you.
diff --git a/src/content/blog/2020/08/10/guix-srht.adoc b/src/content/blog/2020/08/10/guix-srht.adoc
index 4d7e8d5..b450da2 100644
--- a/src/content/blog/2020/08/10/guix-srht.adoc
+++ b/src/content/blog/2020/08/10/guix-srht.adoc
@@ -1,30 +1,24 @@
----
-title: Guix inside sourcehut builds.sr.ht CI
-date: 2020-08-10
-updated_at: 2020-08-19
-layout: post
-lang: en
-ref: guix-inside-sourcehut-builds-sr-ht-ci
----
-After the release of the [NixOS images in builds.sr.ht][0] and much
-usage of it, I also started looking at [Guix][1] and
-wondered if I could get it on the awesome builds.sr.ht service.
-
-[0]: https://man.sr.ht/builds.sr.ht/compatibility.md#nixos
-[1]: https://guix.gnu.org/
-
-The Guix manual section on the [binary installation][2] is very thorough, and
-even a [shell installer script][3] is provided, but it is built towards someone
-installing Guix on their personal computer, and relies heavily on interactive
-input.
-
-[2]: https://guix.gnu.org/manual/en/guix.html#Binary-Installation
-[3]: https://git.savannah.gnu.org/cgit/guix.git/plain/etc/guix-install.sh
+= Guix inside sourcehut builds.sr.ht CI
+
+:nixos: https://man.sr.ht/builds.sr.ht/compatibility.md#nixos
+:guix: https://guix.gnu.org/
+:binary-inst: https://guix.gnu.org/manual/en/guix.html#Binary-Installation
+:shell-inst: https://git.savannah.gnu.org/cgit/guix.git/plain/etc/guix-install.sh
+
+After the release of the {nixos}[NixOS images in builds.sr.ht] and much usage of
+it, I also started looking at {guix}[Guix] and wondered if I could get it on the
+awesome builds.sr.ht service.
+
+The Guix manual section on the {binary-inst}[binary installation] is very
+thorough, and even a {shell-inst}[shell installer script] is provided, but it is
+built towards someone installing Guix on their personal computer, and relies
+heavily on interactive input.
I developed the following set of scripts that I have been using for some time to
-run Guix tasks inside builds.sr.ht jobs. First, `install-guix.sh`:
+run Guix tasks inside builds.sr.ht jobs. First, `install-guix.sh`:
-```shell
+[source,shell]
+----
#!/usr/bin/env bash
set -x
set -Eeuo pipefail
@@ -62,15 +56,18 @@ ln -s /var/guix/profiles/per-user/root/current-guix/bin/guix .
ln -s /var/guix/profiles/per-user/root/current-guix/bin/guix-daemon .
guix archive --authorize < ~root/.config/guix/current/share/guix/ci.guix.gnu.org.pub
-```
+----
-Almost all of it is taken directly from the [binary installation][2] section
-from the manual, with the interactive bits stripped out: after downloading and
-extracting the Guix tarball, we create some symlinks, add guixbuild users and
-authorize the `ci.guix.gnu.org.pub` signing key.
+Almost all of it is taken directly from the {binary-inst}[binary installation]
+section from the manual, with the interactive bits stripped out: after
+downloading and extracting the Guix tarball, we create some symlinks, add
+guixbuild users and authorize the `ci.guix.gnu.org.pub` signing key.
-After installing Guix, we perform a `guix pull` to update Guix inside `start-guix.sh`:
-```shell
+After installing Guix, we perform a `guix pull` to update Guix inside
+`start-guix.sh`:
+
+[source,shell]
+----
#!/usr/bin/env bash
set -x
set -Eeuo pipefail
@@ -79,12 +76,13 @@ sudo guix-daemon --build-users-group=guixbuild &
guix pull
guix package -u
guix --version
-```
+----
Then we can put it all together in a sample `.build.yml` configuration file I'm
using myself:
-```yaml
+[source,yaml]
+----
image: debian/stable
packages:
- wget
@@ -103,26 +101,27 @@ tasks:
- docs: |
cd ./songbooks/
guix environment -m build-aux/guix.scm -- make publish-dist
-```
+----
We have to add the `guix-daemon` to `~/.buildenv` so it can be started on every
-following task run. Also, since we used `wget` inside `install-guix.sh`, we had
+following task run. Also, since we used `wget` inside `install-guix.sh`, we had
to add it to the image's package list.
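In case it helps, the `~/.buildenv` part can be done at the end of
`install-guix.sh` with a single line (a guess at the mechanism, not a copy of
the actual script):
[source,shell]
----
# Hypothetical: make every later task start with a running guix-daemon.
echo 'sudo guix-daemon --build-users-group=guixbuild &' >> ~/.buildenv
----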
After the `install-guix` task, you can use Guix to build and test your project,
or run any `guix environment --ad-hoc my-package -- my script` :)
-## Improvements
+== Improvements
+
+:repository: https://git.sr.ht/~sircmpwn/builds.sr.ht
When I originally created this code I had a reason to have both a `sudo`
call for `sudo ./scripts/install-guix.sh` and `sudo` usages inside
-`install-guix.sh` itself. I couldn't figure out why (it feels like my past self
-was a bit smarter 😬), but it feels ugly now. If it is truly required I could
-add an explanation for it, or remove this entirely in favor of a more elegant solution.
+`install-guix.sh` itself. I couldn't figure out why (it feels like my past self
+was a bit smarter 😬), but it feels ugly now. If it is truly required I could
+add an explanation for it, or remove this entirely in favor of a more elegant
+solution.
I could also contribute the Guix image upstream to builds.sr.ht, but there
-wasn't any build or smoke tests in the original [repository][4], so I wasn't
-inclined to make something that just "works on my machine" or add a maintainence
-burden to the author. I didn't look at it again recently, though.
-
-[4]: https://git.sr.ht/~sircmpwn/builds.sr.ht
+weren't any build or smoke tests in the original {repository}[repository], so I
+wasn't inclined to make something that just ``works on my machine'' or add a
+maintenance burden to the author. I didn't look at it again recently, though.
diff --git a/src/content/blog/2020/08/31/database-i-with-i-had.adoc b/src/content/blog/2020/08/31/database-i-with-i-had.adoc
index 7d127c1..fdcb56c 100644
--- a/src/content/blog/2020/08/31/database-i-with-i-had.adoc
+++ b/src/content/blog/2020/08/31/database-i-with-i-had.adoc
@@ -1,151 +1,111 @@
----
-title: The database I wish I had
-date: 2020-08-31
-updated_at: 2020-09-03
-layout: post
-lang: en
-ref: the-database-i-wish-i-had
-eu_categories: mediator
----
-
-I watched the talk
-"[Platform as a Reflection of Values: Joyent, Node.js and beyond][platform-values]"
-by Bryan Cantrill, and I think he was able to put into words something I already
-felt for some time: if there's no piece of software out there that reflects your
-values, it's time for you to build that software[^talk-time].
-
-[platform-values]: https://vimeo.com/230142234
-[^talk-time]: At the very end, at time 29:49. When talking about the draft of
- this article with a friend, he noted that Bryan O'Sullivan (a different
- Bryan) says a similar thing on his talk
- "[Running a startup on Haskell](https://www.youtube.com/watch?v=ZR3Jirqk6W8)",
- at time 4:15.
-
-I kind of agree with what he said, because this is already happening to me. I
+= The database I wish I had
+
+:empty:
+:values-talk: https://vimeo.com/230142234
+:haskell-startup: https://www.youtube.com/watch?v=ZR3Jirqk6W8
+
+I watched the talk "{values-talk}[Platform as a Reflection of Values: Joyent,
+Node.js and beyond]" by Bryan Cantrill, and I think he was able to put into
+words something I already felt for some time: if there's no piece of software
+out there that reflects your values, it's time for you to build that
+software{empty}footnote:talk-time[
+ At the very end, at time 29:49. When talking about the draft of this article
+ with a friend, he noted that Bryan O'Sullivan (a different Bryan) says a
+ similar thing on his talk "{haskell-startup}[Running a startup on Haskell]",
+ at time 4:15.
+].
+
+I kind of agree with what he said, because this is already happening to me. I
long for a database with a certain set of values, and for a few years I was just
-waiting for someone to finally write it. After watching his talk, Bryan is
+waiting for someone to finally write it. After watching his talk, Bryan is
saying to me: "time to stop waiting, and start writing it yourself".
So let me try to give an overview of such a database, and go over its values.
-## Overview
+== Overview
I want a database that allows me to create decentralized client-side
applications that can sync data.
The best one-line description I can give right now is:
-> It's sort of like PouchDB, Git, Datomic, SQLite and Mentat.
+____
+It's sort of like PouchDB, Git, Datomic, SQLite and Mentat.
+____
A more descriptive version could be:
-> An embedded, immutable, syncable relational database.
+____
+An embedded, immutable, syncable relational database.
+____
Let's go over what I mean by each of those aspects one by one.
-### Embedded
+=== Embedded
-I think the server-side database landscape is diverse and mature enough for
-my needs (even though I end up choosing SQLite most of the time), and what I'm
+:sqlite: https://sqlite.org/index.html
+:sqlite-whentouse: https://sqlite.org/whentouse.html
+:pouchdb: https://pouchdb.com/
+:couchdb: https://couchdb.apache.org/
+:mentat: https://github.com/mozilla/mentat
+:pouchdb-adapters: https://pouchdb.com/adapters.html
+:datomic-storage-services: https://docs.datomic.com/on-prem/storage.html
+:sqlite-amalgamation: https://www.sqlite.org/amalgamation.html
+
+I think the server-side database landscape is diverse and mature enough for my
+needs (even though I end up choosing SQLite most of the time), and what I'm
after is a database to be embedded on client-side applications itself, be it
-desktop, browser, mobile, *etc.*
+desktop, browser, mobile, _etc._
The purpose of such database is not to keep some local cache of data in case of
-lost connectivity: we have good solutions for that already. It should serve as
+lost connectivity: we have good solutions for that already. It should serve as
the source of truth, and allow the application to work on top of it.
-[**SQLite**][sqlite] is a great example of that: it is a very powerful
-relational database that runs [almost anywhere][sqlite-whentouse]. What I miss
-from it that SQLite doesn't provide is the ability to run it on the browser:
-even though you could compile it to WebAssembly, ~~it assumes a POSIX filesystem
-that would have to be emulated~~[^posix-sqlite].
-
-[sqlite]: https://sqlite.org/index.html
-[sqlite-whentouse]: https://sqlite.org/whentouse.html
-[^posix-sqlite]: It was [pointed out to me](https://news.ycombinator.com/item?id=24338881)
- that SQLite doesn't assume the existence of a POSIX filesystem, as I wrongly
- stated. Thanks for the correction.
-
- This makes me consider it as a storage backend all by itself. I
- initially considered having an SQLite storage backend as one implementation
- of the POSIX filesystem storage API that I mentioned. My goal was to rely on
- it so I could validate the correctness of the actual implementation, given
- SQLite's robustness.
-
- However it may even better to just use SQLite, and get an ACID backend
- without recreating a big part of SQLite from scratch. In fact, both Datomic
- and PouchDB didn't create an storage backend for themselves, they just
- plugged on what already existed and already worked. I'm beginning to think
- that it would be wiser to just do the same, and drop entirely the from
- scratch implementation that I mentioned.
-
- That's not to say that adding an IndexedDB compatibility layer to SQLite
- would be enough to make it fit the other requirements I mention on this
- page. SQLite still is an implementation of a update-in-place, SQL,
- table-oriented database. It is probably true that cherry-picking the
- relevant parts of SQLite (like storage access, consistency, crash recovery,
- parser generator, *etc.*) and leaving out the unwanted parts (SQL, tables,
- threading, *etc.*) would be better than including the full SQLite stack, but
- that's simply an optimization. Both could even coexist, if desired.
-
- SQLite would have to be treated similarly to how Datomic treats SQL
- databases: instead of having a table for each entities, spread attributes
- over the tables, *etc.*, it treats SQL databases as a key-value storage so it
- doesn't have to re-implement interacting with the disk that other databases
- do well.
-
- The tables would contain blocks of binary data, so there isn't a difference
- on how the SQLite storage backend behaves and how the IndexedDB storage
- backend behaves, much like how Datomic works the same regardless of the
- storage backend, same for PouchDB.
-
- I welcome corrections on what I said above, too.
-
-[**PouchDB**][pouchdb] is another great example: it's a full reimplementation of
-[CouchDB][couchdb] that targets JavaScript environments, mainly the browser and
-Node.js. However I want a tool that can be deployed anywhere, and not limit its
+{sqlite}[*SQLite*] is a great example of that: it is a very powerful relational
+database that runs {sqlite-whentouse}[almost anywhere]. What SQLite doesn't
+provide, and what I miss, is the ability to run it in the browser: even though
+you could compile it to WebAssembly, [line-through]*it assumes a POSIX
+filesystem that would have to be emulated*[multiblock footnote omitted].
+
+{pouchdb}[*PouchDB*] is another great example: it's a full reimplementation of
+{couchdb}[CouchDB] that targets JavaScript environments, mainly the browser and
+Node.js. However I want a tool that can be deployed anywhere, and not limit its
applications to places that already have a JavaScript runtime environment, or
force the developer to bundle a JavaScript runtime environment with their
-application. This is true for GTK+ applications, command line programs, Android
-apps, *etc.*
-
-[pouchdb]: https://pouchdb.com/
-[couchdb]: https://couchdb.apache.org/
-
-[**Mentat**][mentat] was an interesting project, but its reliance on SQLite
-makes it inherit most of the downsides (and benefits too) of SQLite itself.
+application. This is true for GTK+ applications, command line programs, Android
+apps, _etc._
-[mentat]: https://github.com/mozilla/mentat
+{mentat}[*Mentat*] was an interesting project, but its reliance on SQLite makes
+it inherit most of the downsides (and benefits too) of SQLite itself.
Having such a requirement imposes a different approach to storage: we have to
decouple the knowledge about the intricacies of storage from the usage of
storage itself, so that a module (say query processing) can access storage
-through an API without needing to know about its implementation. This allows
+through an API without needing to know about its implementation. This allows
the database to target a POSIX filesystems storage API and an IndexedDB storage
-API, and make the rest of the code agnostic about storage. PouchDB has such
-mechanism (called [adapters][pouchdb-adapters]) and Datomic has them too (called
-[storage services][datomic-storage-services]).
-
-[pouchdb-adapters]: https://pouchdb.com/adapters.html
-[datomic-storage-services]: https://docs.datomic.com/on-prem/storage.html
+API, and make the rest of the code agnostic about storage. PouchDB has such
+mechanism (called {pouchdb-adapters}[adapters]) and Datomic has them too (called
+{datomic-storage-services}[storage services]).
This would allow the database to adapt to where it is embedded: when targeting
-the browser the IndexedDB storage API would provide the persistence layer
-that the database requires, and similarly the POSIX filesystem storage API would
+the browser the IndexedDB storage API would provide the persistence layer that
+the database requires, and similarly the POSIX filesystem storage API would
provide the persistence layer when targeting POSIX systems (like desktops,
-mobile, *etc.*).
+mobile, _etc._).
But there's also an extra restriction that comes from being embedded: it
needs to provide an embeddable artifact, most likely a binary library object
-that exposes a C compatible FFI, similar to
-[how SQLite does][sqlite-amalgamation]. Bundling a full runtime environment is
-possible, but doesn't make it a compelling solution for embedding. This rules
-out most languages, and leaves us with C, Rust, Zig, and similar options that
-can target POSIX systems and WebAssembly.
+that exposes a C compatible FFI, similar to {sqlite-amalgamation}[how SQLite
+does]. Bundling a full runtime environment is possible, but doesn't make it a
+compelling solution for embedding. This rules out most languages, and leaves
+us with C, Rust, Zig, and similar options that can target POSIX systems and
+WebAssembly.
-[sqlite-amalgamation]: https://www.sqlite.org/amalgamation.html
+=== Immutable
-### Immutable
+:datomic: https://www.datomic.com/
+:day-of-datomic: https://vimeo.com/116315075
+:git: https://git-scm.com/
+:sqlite-limits: https://sqlite.org/limits.html
+:datomic-no-history: https://docs.datomic.com/cloud/best.html#nohistory-for-high-churn
Being immutable means that only new information is added, no in-place update
ever happens, and nothing is ever deleted.
@@ -154,142 +114,142 @@ Having an immutable database presents us with similar trade-offs found in
persistent data structures, like lack of coordination when doing reads, caches
being always coherent, and more usage of space.
-[**Datomic**][datomic] is the go to database example of this: it will only add
-information (datoms) and allows you to query them in a multitude of ways. Stuart
-Halloway calls it "accumulate-only" over "append-only"[^accumulate-only]:
-
-> It's accumulate-only, it is not append-only. So append-only, most people when
-> they say that they're implying something physical about what happens.
+{datomic}[*Datomic*] is the go to database example of this: it will only add
+information (datoms) and allows you to query them in a multitude of ways.
+Stuart Halloway calls it "accumulate-only" over
+"append-only"{empty}footnote:accumulate-only[
+ Video "{day-of-datomic}[Day of Datomic Part 2]" on Datomic's information
+ model, at time 12:28.
+]:
-[datomic]: https://www.datomic.com/
-[^accumulate-only]: Video "[Day of Datomic Part 2](https://vimeo.com/116315075)"
- on Datomic's information model, at time 12:28.
+____
+It's accumulate-only, it is not append-only. So append-only, most people when
+they say that they're implying something physical about what happens.
+____
Also a database can be append-only and overwrite existing information with new
-information, by doing clean-ups of "stale" data. I prefer to adopt the
+information, by doing clean-ups of "stale" data. I prefer to adopt the
"accumulate-only" naming and approach.
-[**Git**][git] is another example of this: new commits are always added on top
-of the previous data, and it grows by adding commits instead of replacing
-existing ones.
-
-[git]: https://git-scm.com/
+{git}[*Git*] is another example of this: new commits are always added on top of
+the previous data, and it grows by adding commits instead of replacing existing
+ones.
Git repositories can only grow in size, and that is not only an acceptable
condition, but also one of the reasons to use it.
All this means that no in-place updates happens on data, and the database will
be much more concerned about how compact and efficiently it stores data than how
-fast it does writes to disk. Being embedded, the storage limitation is either a)
-how much storage the device has or b) how much storage was designed for the
-application to consume. So even though the database could theoretically operate
+fast it does writes to disk. Being embedded, the storage limitation is either
+a) how much storage the device has or b) how much storage was designed for the
+application to consume. So even though the database could theoretically operate
with hundreds of TBs, a browser page or mobile application wouldn't have access
-to this amount of storage. SQLite even [says][sqlite-limits] that it does
+to this amount of storage. SQLite even {sqlite-limits}[says] that it does
support approximately 280 TBs of data, but those limits are untested.
The upside of keeping everything is that you can have historical views of your
-data, which is very powerful. This also means that applications should turn this
-off when not relevant[^no-history].
+data, which is very powerful. This also means that applications should turn
+this off when not
+relevant{empty}footnote:no-history[
+ Similar to {datomic-no-history}[Datomic's `:db/noHistory`].
+].
-[sqlite-limits]: https://sqlite.org/limits.html
-[^no-history]: Similar to
- [Datomic's `:db/noHistory`](https://docs.datomic.com/cloud/best.html#nohistory-for-high-churn).
+=== Syncable
-### Syncable
+:3-way-merge: https://en.wikipedia.org/wiki/Merge_(version_control)
+:git-remote-gcrypt: https://spwhitton.name/tech/code/git-remote-gcrypt/
-This is a frequent topic when talking about offline-first solutions. When
+This is a frequent topic when talking about offline-first solutions. When
building applications that:
-- can fully work offline,
-- stores data,
-- propagates that data to other application instances,
+* can fully work offline,
+* store data,
+* propagate that data to other application instances,
then you'll need a conflict resolution strategy to handle all the situations
-where different application instances disagree. Those application instances
+where different application instances disagree. Those application instances
could be a desktop and a browser version of the same application, or the same
mobile app in different devices.
A three-way merge seems to be the best approach, on top of which you could add
application specific conflict resolution functions, like:
-- pick the change with higher timestamp;
-- if one change is a delete, pick it;
-- present the diff on the screen and allow the user to merge them.
+* pick the change with higher timestamp;
+* if one change is a delete, pick it;
+* present the diff on the screen and allow the user to merge them.
Some databases try to make this "easy", by choosing a strategy for you, but I've
found that different applications require different conflict resolution
-strategies. Instead, the database should leave this up to the user to decide,
+strategies. Instead, the database should leave this up to the user to decide,
and provide tools for them to do it.
-[**Three-way merges in version control**][3-way-merge] are the best example,
+{3-way-merge}[*Three-way merges in version control*] are the best example,
performing automatic merges when possible and asking the user to resolve
conflicts when they appear.
-The unit of conflict for a version control system is a line of text. The
+The unit of conflict for a version control system is a line of text. The
database equivalent would probably be a single attribute, not a full entity or a
full row.
Making all the conflict resolution logic be local should allow the database to
-have encrypted remotes similar to how [git-remote-gcrypt][git-remote-gcrypt]
-adds this functionality to Git. This would enable users to sync the application
+have encrypted remotes similar to how {git-remote-gcrypt}[git-remote-gcrypt]
+adds this functionality to Git. This would enable users to sync the application
data across devices using an untrusted intermediary.
-[3-way-merge]: https://en.wikipedia.org/wiki/Merge_(version_control)
-[git-remote-gcrypt]: https://spwhitton.name/tech/code/git-remote-gcrypt/
+=== Relational
-### Relational
+:datomic-datalog: https://docs.datomic.com/on-prem/query.html
+:datomic-model: https://docs.datomic.com/cloud/whatis/data-model.html#datoms
I want the power of relational queries on the client applications.
Most of the arguments against traditional table-oriented relational databases
-are related to write performance, but those don't apply here. The bottlenecks
-for client applications usually aren't write throughput. Nobody is interested in
-differentiating between 1 MB/s or 10 MB/s when you're limited to 500 MB total.
+are related to write performance, but those don't apply here. The bottlenecks
+for client applications usually aren't write throughput. Nobody is interested
+in differentiating between 1 MB/s or 10 MB/s when you're limited to 500 MB
+total.
The relational model of the database could either be based on SQL and tables
-like in SQLite, or maybe [datalog][datalog] and [datoms][datoms] like in
-Datomic.
-
-[datalog]: https://docs.datomic.com/on-prem/query.html
-[datoms]: https://docs.datomic.com/cloud/whatis/data-model.html#datoms
+like in SQLite, or maybe {datomic-datalog}[datalog] and {datomic-model}[datoms]
+like in Datomic.
-## From aspects to values
+== From aspects to values
Now let's try to translate the aspects above into values, as suggested by Bryan
Cantrill.
-### Portability
+=== Portability
Being able to target so many different platforms is a bold goal, and the
embedded nature of the database demands portability to be a core value.
-### Integrity
+=== Integrity
When the local database becomes the source of truth of the application, it must
provide consistency guarantees that enable applications to rely on it.
-### Expressiveness
+=== Expressiveness
The database should empower applications to slice and dice the data in any way
they want to.
-## Next steps
+== Next steps
Since I can't find any database that fits these requirements, I've finally come
to terms with doing it myself.
It's probably going to take me a few years to do it, and making it portable
-between POSIX and IndexedDB will probably be the biggest challenge. I got myself
-a few books on databases to start.
+between POSIX and IndexedDB will probably be the biggest challenge. I got
+myself a few books on databases to start.
I wonder if I'll ever be able to get this done.
-## External links
+== External links
-See discussions on [Reddit][reddit], [lobsters][lobsters], [HN][hn] and
-[a lengthy email exchange][lengthy-email].
+:reddit: https://old.reddit.com/r/programming/comments/ijwz5b/the_database_i_wish_i_had/
+:lobsters: https://lobste.rs/s/m9vkg4/database_i_wish_i_had
+:hn: https://news.ycombinator.com/item?id=24337244
+:list: https://lists.sr.ht/~euandreh/public-inbox/%3C010101744a592b75-1dce9281-f0b8-4226-9d50-fd2c7901fa72-000000%40us-west-2.amazonses.com%3E
-[reddit]: https://www.reddit.com/r/programming/comments/ijwz5b/the_database_i_wish_i_had/
-[lobsters]: https://lobste.rs/s/m9vkg4/database_i_wish_i_had
-[hn]: https://news.ycombinator.com/item?id=24337244
-[lengthy-email]: https://lists.sr.ht/~euandreh/public-inbox/%3C010101744a592b75-1dce9281-f0b8-4226-9d50-fd2c7901fa72-000000%40us-west-2.amazonses.com%3E
+See discussions on {reddit}[Reddit], {lobsters}[lobsters], {hn}[HN] and {list}[a
+lengthy email exchange].