author     EuAndreh <eu@euandre.org>    2025-04-18 02:17:12 -0300
committer  EuAndreh <eu@euandre.org>    2025-04-18 02:48:42 -0300
commit     020c1e77489b772f854bb3288b9c8d2818a6bf9d (patch)
tree       142aec725a52162a446ea7d947cb4347c9d573c9 /src/content/blog/2018
parent     Makefile: Remove security.txt.gz (diff)
download   euandre.org-020c1e77489b772f854bb3288b9c8d2818a6bf9d.tar.gz
           euandre.org-020c1e77489b772f854bb3288b9c8d2818a6bf9d.tar.xz
git mv src/content/* src/content/en/
Diffstat (limited to 'src/content/blog/2018')
-rw-r--r--  src/content/blog/2018/07/17/guix-nixos.adoc              197
-rw-r--r--  src/content/blog/2018/08/01/npm-ci-reproducibility.adoc  147
-rw-r--r--  src/content/blog/2018/12/21/ytdl-subs.adoc               279
3 files changed, 0 insertions, 623 deletions
diff --git a/src/content/blog/2018/07/17/guix-nixos.adoc b/src/content/blog/2018/07/17/guix-nixos.adoc
deleted file mode 100644
index 42290f6..0000000
--- a/src/content/blog/2018/07/17/guix-nixos.adoc
+++ /dev/null
@@ -1,197 +0,0 @@
-= Running Guix on NixOS
-
-:install-step: https://www.gnu.org/software/guix/manual/en/html_node/Binary-Installation.html#Binary-Installation
-
-I wanted to run Guix on a NixOS machine. Even though the Guix manual explains
-how to do it {install-step}[step by step], I needed a few extra steps to make
-it work properly.
-
-I couldn't just install GuixSD because my wireless network card doesn't have any
-free drivers (yet).
-
-== Creating `guixbuilder` users
-
-:manual: https://www.gnu.org/software/guix/manual/en/html_node/Build-Environment-Setup.html#Build-Environment-Setup
-
-Guix requires you to create non-root users that will be used to perform the
-builds in the isolated environments.
-
-The {manual}[manual] already provides you with a ready-to-run (as root) command
-for creating the build users:
-
-[source,sh]
-----
-groupadd --system guixbuild
-for i in `seq -w 1 10`;
-do
-  useradd -g guixbuild -G guixbuild        \
-          -d /var/empty -s `which nologin` \
-          -c "Guix build user $i" --system \
-          guixbuilder$i;
-done
-----
-
-:mutable-users: https://nixos.org/nixos/manual/index.html#sec-user-management
-
-However, in my personal NixOS configuration I have disabled
-{mutable-users}[`users.mutableUsers`], which means that even if I run the above
-command, the users will be removed once I rebuild the OS:
-
-[source,sh]
-----
-$ sudo nixos-rebuild switch
-(...)
-removing user ‘guixbuilder7’
-removing user ‘guixbuilder3’
-removing user ‘guixbuilder10’
-removing user ‘guixbuilder1’
-removing user ‘guixbuilder6’
-removing user ‘guixbuilder9’
-removing user ‘guixbuilder4’
-removing user ‘guixbuilder2’
-removing user ‘guixbuilder8’
-removing user ‘guixbuilder5’
-(...)
-----
-
-Instead of enabling `users.mutableUsers` I could create the Guix build users by
-declaring them in my system configuration:
-
-[source,nix]
-----
-{ config, pkgs, ... }:
-
-{
-
-  # ... NixOS usual config elided ...
-
-  users = {
-    mutableUsers = false;
-
-    extraUsers =
-      let
-        andrehUser = {
-          andreh = {
-            # my custom user config
-          };
-        };
-        buildUser = (i:
-          {
-            "guixbuilder${i}" = {                   # guixbuilder$i
-              group = "guixbuild";                  # -g guixbuild
-              extraGroups = ["guixbuild"];          # -G guixbuild
-              home = "/var/empty";                  # -d /var/empty
-              shell = pkgs.nologin;                 # -s `which nologin`
-              description = "Guix build user ${i}"; # -c "Guix build user $i"
-              isSystemUser = true;                  # --system
-            };
-          }
-        );
-      in
-        # merge all users
-        pkgs.lib.fold (str: acc: acc // buildUser str)
-          andrehUser
-          # for i in `seq -w 1 10`
-          (map (pkgs.lib.fixedWidthNumber 2) (builtins.genList (n: n+1) 10));
-
-    extraGroups.guixbuild = {
-      name = "guixbuild";
-    };
-  };
-}
-----
-
-Here I used `fold` and the `//` operator to merge all of the configuration sets
-into a single `extraUsers` value.
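-
-If you want to see the merging behaviour in isolation, a quick check (assuming
-`<nixpkgs>` is available in `NIX_PATH`; the attribute names are only
-illustrative) is to evaluate a small expression from the shell:
-
-[source,sh]
-----
-# fold the sets from the right, merging each one into the accumulator with `//`
-nix-instantiate --eval --strict --expr \
-  'with import <nixpkgs> { }; lib.fold (x: acc: acc // x) { } [ { a = 1; } { b = 2; } ]'
-# => { a = 1; b = 2; }
-----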
-
-== Creating the `systemd` service
-
-:service-file: https://git.savannah.gnu.org/cgit/guix.git/tree/etc/guix-daemon.service.in?id=00c86a888488b16ce30634d3a3a9d871ed6734a2
-
-One other thing missing was the `systemd` service.
-
-First, I couldn't just copy the `.service` file to `/etc`, since in NixOS that
-folder isn't writable. But I also wanted the service to be better integrated
-with the OS.
-
-That was a little easier than creating the users: all I had to do was translate
-the provided {service-file}[`guix-daemon.service.in`] configuration to an
-equivalent Nix expression:
-
-[source,ini]
-----
-# This is a "service unit file" for the systemd init system to launch
-# 'guix-daemon'. Drop it in /etc/systemd/system or similar to have
-# 'guix-daemon' automatically started.
-
-[Unit]
-Description=Build daemon for GNU Guix
-
-[Service]
-ExecStart=/var/guix/profiles/per-user/root/guix-profile/bin/guix-daemon --build-users-group=guixbuild
-Environment=GUIX_LOCPATH=/root/.guix-profile/lib/locale
-RemainAfterExit=yes
-StandardOutput=syslog
-StandardError=syslog
-
-# See <https://lists.gnu.org/archive/html/guix-devel/2016-04/msg00608.html>.
-# Some package builds (for example, go@1.8.1) may require even more than
-# 1024 tasks.
-TasksMax=8192
-
-[Install]
-WantedBy=multi-user.target
-----
-
-This sample `systemd` configuration file became:
-
-[source,nix]
-----
-guix-daemon = {
-  enable = true;
-  description = "Build daemon for GNU Guix";
-  serviceConfig = {
-    ExecStart = "/var/guix/profiles/per-user/root/guix-profile/bin/guix-daemon --build-users-group=guixbuild";
-    Environment = "GUIX_LOCPATH=/root/.guix-profile/lib/locale";
-    RemainAfterExit = "yes";
-    StandardOutput = "syslog";
-    StandardError = "syslog";
-    TasksMax = "8192";
-  };
-  wantedBy = [ "multi-user.target" ];
-};
-----
-
-There you go! After running `sudo nixos-rebuild switch` I could get Guix up and
-running:
-
-[source,sh]
-----
-$ guix package -i hello
-The following package will be installed:
- hello 2.10 /gnu/store/bihfrh609gkxb9dp7n96wlpigiv3krfy-hello-2.10
-
-substitute: updating substitutes from 'https://mirror.hydra.gnu.org'... 100.0%
-The following derivations will be built:
- /gnu/store/nznmdn6inpwxnlkrasydmda4s2vsp9hg-profile.drv
- /gnu/store/vibqrvw4c8lacxjrkqyzqsdrmckv77kq-fonts-dir.drv
- /gnu/store/hi8alg7wi0wgfdi3rn8cpp37zhx8ykf3-info-dir.drv
- /gnu/store/cvkbp378cvfjikz7mjymhrimv7j12p0i-ca-certificate-bundle.drv
- /gnu/store/d62fvxymnp95rzahhmhf456bsf0xg1c6-manual-database.drv
-Creating manual page database...
-1 entries processed in 0.0 s
-2 packages in profile
-$ hello
-Hello, world!
-----
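-
-To double-check that the daemon itself came up after the rebuild (assuming the
-snippet above was declared under `systemd.services`), the usual `systemctl`
-query works:
-
-[source,sh]
-----
-$ systemctl status guix-daemon
-----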
-
-:nixos-modules: https://nixos.org/nixos/manual/index.html#sec-writing-modules
-:req: https://www.gnu.org/software/guix/manual/en/html_node/Requirements.html#Requirements
-
-Some improvements to this approach are:
-
-. looking into {nixos-modules}[NixOS modules] and trying to bundle everything
-  together into a single logical unit;
-. {req}[building Guix from source] and sharing the Nix store and daemon with
-  Guix.
-
-Happy Guix/Nix hacking!
diff --git a/src/content/blog/2018/08/01/npm-ci-reproducibility.adoc b/src/content/blog/2018/08/01/npm-ci-reproducibility.adoc
deleted file mode 100644
index 76bd8e6..0000000
--- a/src/content/blog/2018/08/01/npm-ci-reproducibility.adoc
+++ /dev/null
@@ -1,147 +0,0 @@
-= Verifying "npm ci" reproducibility
-:updatedat: 2019-05-22
-
-:empty:
-:npm-5: https://blog.npmjs.org/post/161081169345/v500
-:package-locks-old: https://docs.npmjs.com/files/package-locks
-:package-lock: https://docs.npmjs.com/files/package-lock.json
-:add-npm-ci: https://blog.npmjs.org/post/171556855892/introducing-npm-ci-for-faster-more-reliable
-:cli-docs: https://docs.npmjs.com/cli/install#description
-:tricky-issue: https://github.com/npm/npm/issues/17979#issuecomment-332701215
-
-When {npm-5}[npm@5] came bringing {package-locks-old}[package-locks] with it, I
-was confused about the benefits it provided, since running `npm install` more
-than once could resolve all the dependencies again and yield yet another fresh
-`package-lock.json` file. The message saying "you should add this file to
-version control" left me hesitant about what to
-do{empty}footnote:package-lock-message[
-  The {cli-docs}[documentation] claims `npm install` is driven by the existing
-  `package-lock.json`, but that's actually {tricky-issue}[a little bit tricky].
-].
-
-However, the {add-npm-ci}[addition of `npm ci`] filled this gap: it's a stricter
-variation of `npm install` which guarantees that "{package-lock}[subsequent
-installs are able to generate identical trees]". But are they really identical?
-I could see that I no longer had the problem of differing installation outputs,
-but I didn't know for *sure* if the trees were really identical.
-
-== Computing the hash of a directory's content
-
-:merkle-tree: https://en.wikipedia.org/wiki/Merkle_tree
-
-I quickly searched for a way to check for the hash signature of an entire
-directory tree, but I couldn't find one. I've made a poor man's
-{merkle-tree}[Merkle tree] implementation using `sha256sum` and a few piped
-commands at the terminal:
-
-[source,sh]
-----
-merkle-tree () {
-  dirname="${1-.}"
-  pushd "$dirname"
-  find . -type f |
-    sort |
-    xargs -I{} sha256sum "{}" |
-    sha256sum |
-    awk '{print $1}'
-  popd
-}
-----
-
-Going through it line by line:
-
-* #1 we define a Bash function called `merkle-tree`;
-* #2 it accepts a single argument: the directory to compute the merkle tree
-  from. If nothing is given, it runs on the current directory (`.`);
-* #3 we go to the directory, so we don't get different prefixes in `find`'s
- output (like `../a/b`);
-* #4 we get all files from the directory tree. Since we're using `sha256sum` to
- compute the hash of the file contents, we need to filter out folders from it;
-* #5 we need to sort the output, since different file systems and `find`
- implementations may return files in different orders;
-* #6 we use `xargs` to compute the hash of each file individually through
- `sha256sum`. Since a file may contain spaces we need to escape it with
- quotes;
-* #7 we compute the hash of the combined hashes. Since `sha256sum` output is
-  formatted like `<hash> <filename>`, it produces a different final hash if a
-  file ever changes name without changing its content;
-* #8 we get the final hash output, excluding the `<filename>` (which is `-` in
- this case, aka `stdin`).
-
-=== Positive points:
-
-. it ignores timestamps: running it more than once on different installations
-  yields the same hash;
-. the name of the file is included in the final hash computation.
-
-=== Limitations:
-
-. it ignores empty folders from the hash computation (see the example right
-  after this list);
-. the implementation's only goal is to represent, with a single digest, whether
-  the content of a given directory is the same or not. Leaf presence checking
-  is obviously missing from it.
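-
-To illustrate the first limitation, assuming `merkle-tree` is already defined
-in the current shell (the paths below exist only for this demonstration):
-
-[source,sh]
-----
-mkdir -p /tmp/empty-folder-test/a/
-echo "one" > /tmp/empty-folder-test/a/one.txt
-merkle-tree /tmp/empty-folder-test/ # prints some hash
-mkdir /tmp/empty-folder-test/b/     # add an empty folder
-merkle-tree /tmp/empty-folder-test/ # prints the same hash: the empty folder is invisible
-----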
-
-=== Testing locally with sample data
-
-[source,sh]
-----
-mkdir /tmp/merkle-tree-test/
-cd /tmp/merkle-tree-test/
-mkdir -p a/b/ a/c/ d/
-echo "one" > a/b/one.txt
-echo "two" > a/c/two.txt
-echo "three" > d/three.txt
-merkle-tree . # output is be343bb01fe00aeb8fef14a3e16b1c3d1dccbf86d7e41b4753e6ccb7dc3a57c3
-merkle-tree . # output still is be343bb01fe00aeb8fef14a3e16b1c3d1dccbf86d7e41b4753e6ccb7dc3a57c3
-echo "four" > d/four.txt
-merkle-tree . # output is now b5464b958969ed81815641ace96b33f7fd52c20db71a7fccc45a36b3a2ae4d4c
-rm d/four.txt
-merkle-tree . # output back to be343bb01fe00aeb8fef14a3e16b1c3d1dccbf86d7e41b4753e6ccb7dc3a57c3
-echo "hidden-five" > a/b/one.txt
-merkle-tree . # output changed 471fae0d074947e4955e9ac53e95b56e4bc08d263d89d82003fb58a0ffba66f5
-----
-
-It seems to work for this simple test case.
-
-You can try copying and pasting it to verify the hash signatures.
-
-== Using `merkle-tree` to check the output of `npm ci`
-
-_I've done all of the following using Node.js v8.11.3 and npm@6.1.0_.
-
-In this test case I'll take the main repo of
-https://lernajs.io/[Lerna]footnote:lerna-package-lock[
- Finding a big known repo that actually committed the `package-lock.json` file
- was harder than I expected.
-]:
-
-[source,sh]
-----
-cd /tmp/
-git clone https://github.com/lerna/lerna.git
-cd lerna/
-git checkout 57ff865c0839df75dbe1974971d7310f235e1109
-npm ci
-merkle-tree node_modules/ # outputs 11e218c4ac32fac8a9607a8da644fe870a25c99821167d21b607af45699afafa
-rm -rf node_modules/
-npm ci
-merkle-tree node_modules/ # outputs 11e218c4ac32fac8a9607a8da644fe870a25c99821167d21b607af45699afafa
-npm ci # test if it also works with an existing node_modules/ folder
-merkle-tree node_modules/ # outputs 11e218c4ac32fac8a9607a8da644fe870a25c99821167d21b607af45699afafa
-----
-
-Good job `npm ci` :)
-
-Lines #6 and #9 take some time to run (21 seconds on my machine), but this
-specific use case isn't performance-sensitive. The slowest step is computing
-the hash of each individual file.
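-
-If you ever need it to be faster, a sketch of a parallel variant (assuming GNU
-`xargs` and `nproc` are available) is to hash the files concurrently and then
-sort the resulting `<hash> <filename>` lines, so the final digest stays
-deterministic regardless of which file finished first:
-
-[source,sh]
-----
-merkle-tree-parallel () {
-  dirname="${1-.}"
-  pushd "$dirname"
-  find . -type f |
-    sort |
-    xargs -P "$(nproc)" -I{} sha256sum "{}" |
-    sort |
-    sha256sum |
-    awk '{print $1}'
-  popd
-}
-----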
-
-== Conclusion
-
-`npm ci` really "generates identical trees".
-
-I'm not aware of any other existing solution for verifying the hash signature
-of a directory. If you know of one, shoot me an email, as I'd like to know
-about it.
-
-== *Edit*
-
-2019-05-22: Fix spelling.
diff --git a/src/content/blog/2018/12/21/ytdl-subs.adoc b/src/content/blog/2018/12/21/ytdl-subs.adoc
deleted file mode 100644
index 10afbf6..0000000
--- a/src/content/blog/2018/12/21/ytdl-subs.adoc
+++ /dev/null
@@ -1,279 +0,0 @@
-= Using "youtube-dl" to manage YouTube subscriptions
-
-:ytsm-ann: https://old.reddit.com/r/DataHoarder/comments/9sg8q5/i_built_a_selfhosted_youtube_subscription_manager/
-:ytsm-code: https://github.com/chibicitiberiu/ytsm
-:ytdl: https://youtube-dl.org/
-
-I've recently read the {ytsm-ann}[announcement] of a very nice
-{ytsm-code}[self-hosted YouTube subscription manager]. I haven't used YouTube's
-built-in subscriptions for a while now, and haven't missed them at all. When I
-saw the announcement, I considered writing about the solution I've built on top
-of {ytdl}[youtube-dl].
-
-== Background: the problem with YouTube
-
-:net-giants: https://staltz.com/what-happens-when-you-block-internet-giants.html
-
-In many ways, I agree with {net-giants}[André Staltz's view on data ownership
-and privacy]:
-
-____
-I started with the basic premise that "I want to be in control of my data".
-Sometimes that meant choosing when to interact with an internet giant and how
-much I feel like revealing to them. Most of times it meant not interacting with
-them at all. I don't want to let them be in full control of how much they can
-know about me. I don't want to be in autopilot mode. (...) Which leads us to
-YouTube. While I was able to find alternatives to Gmail (Fastmail), Calendar
-(Fastmail), Translate (Yandex Translate), _etc._ YouTube remains as the most
-indispensable Google-owned web service. It is really really hard to avoid
-consuming YouTube content. It was probably the smartest startup acquisition
-ever. My privacy-oriented alternative is to watch YouTube videos through Tor,
-which is technically feasible but not polite to use the Tor bandwidth for these
-purposes. I'm still scratching my head with this issue.
-____
-
-Even though I don't use most of the alternative services he mentions, I do
-watch videos from YouTube. But I also feel uncomfortable logging in to YouTube
-with a Google account, watching videos, creating playlists and similar things.
-
-Using the mobile app is worse: you can't even block ads there. You have even
-less control over what you share with YouTube and Google.
-
-== youtube-dl
-
-:other-sites: https://rg3.github.io/youtube-dl/supportedsites.html
-
-youtube-dl is a command-line tool for downloading videos, from YouTube and
-{other-sites}[many other sites]:
-
-[source,sh]
-----
-$ youtube-dl https://www.youtube.com/watch?v=rnMYZnY3uLA
-[youtube] rnMYZnY3uLA: Downloading webpage
-[youtube] rnMYZnY3uLA: Downloading video info webpage
-[download] Destination: A Origem da Vida _ Nerdologia-rnMYZnY3uLA.mp4
-[download] 100% of 32.11MiB in 00:12
-----
-
-It can be used to download individual videos as shown above, but it also has
-some interesting flags that we can use:
-
-* `--output`: use a custom template to create the name of the downloaded file;
-* `--download-archive`: use a text file for recording and remembering which
- videos were already downloaded;
-* `--prefer-free-formats`: prefer free video formats, like `webm`, `ogv` and
- Matroska `mkv`;
-* `--playlist-end`: how many videos to download from a "playlist" (a channel, a
- user or an actual playlist);
-* `--write-description`: write the video description to a `.description` file,
- useful for accessing links and extra content.
-
-Putting it all together:
-
-[source,sh]
-----
-$ youtube-dl "https://www.youtube.com/channel/UClu474HMt895mVxZdlIHXEA" \
-    --download-archive ~/Nextcloud/cache/youtube-dl-seen.conf \
-    --prefer-free-formats \
-    --playlist-end 20 \
-    --write-description \
-    --output "~/Downloads/yt-dl/%(uploader)s/%(upload_date)s - %(title)s.%(ext)s"
-----
-
-This will download the latest 20 videos from the selected channel, and write
-down the video IDs in the `youtube-dl-seen.conf` file. Running it again
-immediately afterwards won't have any effect.
-
-If the channel posts one more video, running the same command again will
-download only the last video, since the other 19 were already downloaded.
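-
-For reference, the "seen" file is just plain text: each line records the
-extractor and the video id, roughly `youtube <video id>` for YouTube videos
-(the ids are elided below), which makes it easy to inspect or sync:
-
-[source,sh]
-----
-$ head -n 3 ~/Nextcloud/cache/youtube-dl-seen.conf
-youtube <video id 1>
-youtube <video id 2>
-youtube <video id 3>
-----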
-
-With this basic setup you have a minimal subscription system at work, and you
-can create some functions to help you manage that:
-
-[source,sh]
-----
-#!/usr/bin/env bash
-# bash (not plain sh) is needed for the "export -f" calls below
-
-export DEFAULT_PLAYLIST_END=15
-
-download() {
-  youtube-dl "$1" \
-    --download-archive ~/Nextcloud/cache/youtube-dl-seen.conf \
-    --prefer-free-formats \
-    --playlist-end "$2" \
-    --write-description \
-    --output "~/Downloads/yt-dl/%(uploader)s/%(upload_date)s - %(title)s.%(ext)s"
-}
-export -f download
-
-
-download_user() {
-  download "https://www.youtube.com/user/$1" "${2-$DEFAULT_PLAYLIST_END}"
-}
-export -f download_user
-
-
-download_channel() {
-  download "https://www.youtube.com/channel/$1" "${2-$DEFAULT_PLAYLIST_END}"
-}
-export -f download_channel
-
-
-download_playlist() {
-  download "https://www.youtube.com/playlist?list=$1" "${2-$DEFAULT_PLAYLIST_END}"
-}
-export -f download_playlist
-----
-
-With these functions, you can now have a subscription-fetching script to
-download the latest videos from your favorite channels:
-
-[source,sh]
-----
-#!/usr/bin/env bash
-# run with bash so that the functions exported above are inherited
-
-download_user ClojureTV 15
-download_channel 'UCmEClzCBDx-vrt0GuSKBd9g' 100
-download_playlist 'PLqG7fA3EaMRPzL5jzd83tWcjCUH9ZUsbX' 15
-----
-
-Now, whenever you want to watch the latest videos, just run the above script
-and you'll get all of them on your local machine.
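-
-If you'd rather not run it by hand, something as simple as a cron entry does
-the job (the path to the script here is hypothetical):
-
-[source,sh]
-----
-# crontab -e: fetch new videos every 6 hours
-0 */6 * * * $HOME/bin/subscriptions.sh
-----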
-
-== Tradeoffs
-
-=== I've made it for myself, with my use case in mind
-
-
-[qanda]
-Offline::
-My internet speed is somewhat
-reasonable{empty}footnote:internet-speed[
-  Considering how expensive it is and the many ways it could be better, but also
-  how much it has improved over the last years, I say it's reasonable.
-], but it is really unstable. Both at work and at home, it's not uncommon to
-lose internet access for 2 minutes, 3~5 times every day, and to stay completely
-offline for a couple of hours once every week.
-+
-Working through the hassle of keeping a playlist on disk has paid off many,
-many times. Sometimes I don't even notice when the connection drops for a few
-minutes, because I'm watching a video and working on some document, all on my
-local computer.
-+
-There's also none of the on-the-fly quality adjustment of YouTube's web player:
-I always pick the highest quality, and it doesn't change during the video. For
-some types of content, like a podcast with some tiny visual resources, this
-doesn't change much. For other types of content, like a keynote presentation
-with text written on the slides, watching it at 144p isn't really an option.
-+
-If the internet connection drops during the video download, youtube-dl will
-resume from where it stopped.
-+
-This is an offline-first benefit that I really like, and it works well for me.
-
-
-Sync the "seen" file::
-I already have a running instance of Nextcloud, so just dumping the
-`youtube-dl-seen.conf` file inside Nextcloud was a no-brainer.
-+
-You could try putting it in a dedicated git repository, and wrap the script with
-an autocommit after every run. If you ever had a merge conflict, you'd simply
-accept all changes and then deduplicate the file (its line order doesn't matter
-to youtube-dl):
-+
-[source,sh]
-----
-$ sort -u youtube-dl-seen.conf -o youtube-dl-seen.conf
-----
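-+
-A minimal version of that autocommit wrapper could look like the sketch below
-(the repository location and the script name are hypothetical):
-+
-[source,sh]
-----
-#!/bin/sh
-# fetch the latest videos, then record the updated "seen" file in git
-~/bin/subscriptions.sh
-cd ~/yt-seen-repo/
-git add youtube-dl-seen.conf
-git commit -m "Update youtube-dl-seen.conf" # exits non-zero when nothing changed
-git push
-----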
-
-
-Doesn't work on mobile::
-The primary device I use every day is my laptop, not my phone. It works well
-for me this way.
-+
-Also, it's harder to add ad-blockers to mobile phones, and most mobile software
-still depends on Google's and Apple's blessing.
-+
-If you wish, you can sync the videos to the SD card periodically, but that's a
-bit of extra manual work.
-
-
-=== The Good
-
-
-[qanda]
-Better privacy::
-We don't even have to configure the ad-blocker to keep ads and trackers away!
-+
-YouTube still has your IP address, so using a VPN is always a good idea.
-However, given the current implementation, a timing analysis would still be
-able to identify you.
-
-
-No need to self-host::
-There's no host that needs maintenance. Everything runs locally.
-+
-As long as you keep youtube-dl itself up to date and sync your "seen" file,
-there's little extra work to do.
-
-
-Track your subscriptions with git::
-After creating a `subscriptions.sh` executable that downloads all the videos,
-you can add it to git and use it to track metadata about your subscriptions.
-
-
-=== The Bad
-
-
-[qanda]
-Maximum playlist size is your disk size::
-This is a good thing for getting a realistic view of your actual "watch later"
-list. However, I've run out of disk space many times, and now I need to be more
-aware of how much is left.
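-+
-A plain `du` check on the download folder (the path being whatever you passed
-to `--output`) is usually enough to keep an eye on it:
-+
-[source,sh]
-----
-$ du -sh ~/Downloads/yt-dl/
-----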
-
-
-=== The Ugly
-
-We can only avoid all the bad parts of YouTube with youtube-dl as long as
-YouTube keeps the videos public and programmatically accessible. If YouTube
-ever blocks that, we'd lose the ability to consume content this way, but we'd
-also lose confidence in YouTube as a healthy repository of videos on the
-internet.
-
-
-== Going beyond
-
-Since you're running everything locally, here are some possibilities to be
-explored:
-
-
-=== A playlist that is too long to download all at once
-
-You can wrap the `download_playlist` function (let's call the wrapper
-`inc_download`) and, instead of passing a fixed number to the `--playlist-end`
-parameter, store the current `$n` in a file (something like
-`$HOME/.yt-db/$PLAYLIST_ID`) and increment it by `$step` every time you run
-`inc_download`.
-
-This way you can incrementally download videos from a huge playlist without
-filling your disk with gigabytes of content all at once.
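-
-A sketch of such a wrapper, reusing the `download_playlist` function and the
-`DEFAULT_PLAYLIST_END` variable defined earlier (the step handling is just one
-way of doing it):
-
-[source,sh]
-----
-inc_download() {
-  playlist_id="$1"
-  step="${2-$DEFAULT_PLAYLIST_END}"
-  db_file="$HOME/.yt-db/$playlist_id"
-  mkdir -p "$HOME/.yt-db/"
-  # read the previous count (0 on the first run), bump it by $step and save it
-  n="$(cat "$db_file" 2>/dev/null || echo 0)"
-  n=$((n + step))
-  echo "$n" > "$db_file"
-  download_playlist "$playlist_id" "$n"
-}
-----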
-
-
-=== Multiple computer scenario
-
-The `download_playlist` function could be aware of the specific machine that it
-is running on and apply specific policies depending on the machine: always
-download everything; only download videos that aren't present anywhere else;
-_etc._
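-
-A per-machine policy can be as simple as switching on the hostname (the
-hostnames and numbers here are only illustrative):
-
-[source,sh]
-----
-case "$(hostname)" in
-  desktop) DEFAULT_PLAYLIST_END=100 ;; # plenty of disk: download a lot
-  laptop)  DEFAULT_PLAYLIST_END=10  ;; # limited disk: only the most recent ones
-esac
-----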
-
-
-== Conclusion
-
-youtube-dl is a great tool to keep at hand. It covers a really large range of
-video websites and works robustly.
-
-Feel free to copy and modify this code, and send me suggestions for
-improvements or related content.
-
-== _Edit_
-
-2019-05-22: Fix spelling.