---
title: Running Guix on NixOS
date: 2018-07-17
---
I wanted to run Guix on a NixOS machine. Even though the Guix manual explains how to do it [[https://www.gnu.org/software/guix/manual/en/html_node/Binary-Installation.html#Binary-Installation][step by step]], I needed a few extra steps to make it work properly.

I couldn't just install GuixSD because my wireless network card doesn't have any free/libre drivers (yet).
** Creating =guixbuilder= users
Guix requires you to create non-root users that will be used to perform the builds in the isolated environments.

The [[https://www.gnu.org/software/guix/manual/en/html_node/Build-Environment-Setup.html#Build-Environment-Setup][manual]] already provides a ready-to-run (as root) command for creating the build users:
#+BEGIN_SRC bash -n
groupadd --system guixbuild
for i in `seq -w 1 10`;
do
  useradd -g guixbuild -G guixbuild \
          -d /var/empty -s `which nologin` \
          -c "Guix build user $i" --system \
          guixbuilder$i;
done
#+END_SRC
However, in my personal NixOS I have disabled [[https://nixos.org/nixos/manual/index.html#sec-user-management][=users.mutableUsers=]], which means that even if I run the above command, the users will be removed once I rebuild my OS:
#+BEGIN_SRC shell -n
$ sudo nixos-rebuild switch
(...)
removing user ‘guixbuilder7’
removing user ‘guixbuilder3’
removing user ‘guixbuilder10’
removing user ‘guixbuilder1’
removing user ‘guixbuilder6’
removing user ‘guixbuilder9’
removing user ‘guixbuilder4’
removing user ‘guixbuilder2’
removing user ‘guixbuilder8’
removing user ‘guixbuilder5’
(...)
#+END_SRC
Instead of enabling =users.mutableUsers= I could add the Guix build users to my system configuration:
#+BEGIN_SRC nix -n
{ config, pkgs, ...}:

{

  # ... NixOS usual config elided ...

  users = {
    mutableUsers = false;

    extraUsers =
      let
        andrehUser = {
          andreh = {
            # my custom user config
          };
        };
        buildUser = (i:
          {
            "guixbuilder${i}" = {                   # guixbuilder$i
              group = "guixbuild";                  # -g guixbuild
              extraGroups = ["guixbuild"];          # -G guixbuild
              home = "/var/empty";                  # -d /var/empty
              shell = pkgs.nologin;                 # -s `which nologin`
              description = "Guix build user ${i}"; # -c "Guix build user $i"
              isSystemUser = true;                  # --system
            };
          }
        );
      in
        # merge all users
        pkgs.lib.fold (str: acc: acc // buildUser str)
                      andrehUser
                      # for i in `seq -w 1 10`
                      (map (pkgs.lib.fixedWidthNumber 2)
                           (builtins.genList (n: n+1) 10));

    extraGroups.guixbuild = {
      name = "guixbuild";
    };
  };
}
#+END_SRC
Here I used =fold= and the =//= operator to merge all of the configuration sets into a single =extraUsers= value.
** Creating the =systemd= service
One other thing missing was the =systemd= service.

First, I couldn't just copy the =.service= file to =/etc=, since in NixOS that folder isn't writable. But I also wanted the service to be better integrated with the OS.

That was a little easier than creating the users: all I had to do was translate the provided [[https://git.savannah.gnu.org/cgit/guix.git/tree/etc/guix-daemon.service.in?id=00c86a888488b16ce30634d3a3a9d871ed6734a2][=guix-daemon.service.in=]] configuration to an equivalent Nix expression:
#+BEGIN_SRC ini -n
# This is a "service unit file" for the systemd init system to launch
# 'guix-daemon'.  Drop it in /etc/systemd/system or similar to have
# 'guix-daemon' automatically started.

[Unit]
Description=Build daemon for GNU Guix

[Service]
ExecStart=/var/guix/profiles/per-user/root/guix-profile/bin/guix-daemon --build-users-group=guixbuild
Environment=GUIX_LOCPATH=/root/.guix-profile/lib/locale
RemainAfterExit=yes
StandardOutput=syslog
StandardError=syslog

# See <https://lists.gnu.org/archive/html/guix-devel/2016-04/msg00608.html>.
# Some package builds (for example, go@1.8.1) may require even more than
# 1024 tasks.
TasksMax=8192

[Install]
WantedBy=multi-user.target
#+END_SRC
This sample =systemd= configuration file became:
#+BEGIN_SRC nix -n
guix-daemon = {
  enable = true;
  description = "Build daemon for GNU Guix";
  serviceConfig = {
    ExecStart = "/var/guix/profiles/per-user/root/guix-profile/bin/guix-daemon --build-users-group=guixbuild";
    Environment = "GUIX_LOCPATH=/root/.guix-profile/lib/locale";
    RemainAfterExit = "yes";
    StandardOutput = "syslog";
    StandardError = "syslog";
    TasksMax = "8192";
  };
  wantedBy = [ "multi-user.target" ];
};
#+END_SRC
There you go! After running =sudo nixos-rebuild switch= I could get Guix up and running:
#+BEGIN_SRC bash -n
$ guix package -i hello
The following package will be installed:
   hello	2.10	/gnu/store/bihfrh609gkxb9dp7n96wlpigiv3krfy-hello-2.10

substitute: updating substitutes from 'https://mirror.hydra.gnu.org'... 100.0%
The following derivations will be built:
   /gnu/store/nznmdn6inpwxnlkrasydmda4s2vsp9hg-profile.drv
   /gnu/store/vibqrvw4c8lacxjrkqyzqsdrmckv77kq-fonts-dir.drv
   /gnu/store/hi8alg7wi0wgfdi3rn8cpp37zhx8ykf3-info-dir.drv
   /gnu/store/cvkbp378cvfjikz7mjymhrimv7j12p0i-ca-certificate-bundle.drv
   /gnu/store/d62fvxymnp95rzahhmhf456bsf0xg1c6-manual-database.drv
Creating manual page database...
1 entries processed in 0.0 s
2 packages in profile
$ hello
Hello, world!
#+END_SRC
Some improvements to this approach are:
1. looking into [[https://nixos.org/nixos/manual/index.html#sec-writing-modules][NixOS modules]] and trying to bundle everything together into a single logical unit;
2. [[https://www.gnu.org/software/guix/manual/en/html_node/Requirements.html#Requirements][building Guix from source]] and sharing the Nix store and daemon with Guix.

Happy Guix/Nix hacking!
---
title: Verifying <code>npm ci</code> reproducibility
date: 2018-08-01
---
When [[https://blog.npmjs.org/post/161081169345/v500][npm@5]] came bringing [[https://docs.npmjs.com/files/package-locks][package-locks]] with it, I was confused about the benefits it provided, since running =npm install= more than once could resolve all the dependencies again and yield yet another fresh =package-lock.json= file. The message saying "you should add this file to version control" left me hesitant on what to do[fn:npm-install].

However the [[https://blog.npmjs.org/post/171556855892/introducing-npm-ci-for-faster-more-reliable][addition of =npm ci=]] filled this gap: it's a stricter variation of =npm install= which guarantees that "[[https://docs.npmjs.com/files/package-lock.json][subsequent installs are able to generate identical trees]]". But are they really identical? I could see that I didn't have the same problems of different installation outputs, but I didn't know for *sure* if they were really identical.
** Computing the hash of a directory's content
I quickly searched for a way to check the hash signature of an entire directory tree, but I couldn't find one.
I've made a poor man's [[https://en.wikipedia.org/wiki/Merkle_tree][Merkle tree]] implementation using =sha256sum= and a few piped commands at the terminal:
#+BEGIN_SRC bash -n
merkle-tree () {
  dirname="${1-.}"
  pushd "$dirname"
  find . -type f | \
    sort | \
    xargs -I{} sha256sum "{}" | \
    sha256sum | \
    awk '{print $1}'
  popd
}
#+END_SRC
Going through it line by line:
- #1 we define a Bash function called =merkle-tree=;
- #2 it accepts a single argument: the directory to compute the merkle tree from. If nothing is given, it runs on the current directory (=.=);
- #3 we go to the directory, so we don't get different prefixes in =find='s output (like =../a/b=);
- #4 we get all files from the directory tree. Since we're using =sha256sum= to compute the hash of the file contents, we need to filter out folders from it;
- #5 we need to sort the output, since different file systems and =find= implementations may return files in different orders;
- #6 we use =xargs= to compute the hash of each file individually through =sha256sum=. Since a file may contain spaces we need to escape it with quotes;
- #7 we compute the hash of the combined hashes. Since =sha256sum='s output is formatted like =<hash> <filename>=, it produces a different final hash if a file ever changes name without changing its content;
- #8 we get the final hash output, excluding the =<filename>= (which is =-= in this case, aka =stdin=).
*** Positive points:
1. it ignores timestamps: running it more than once on different installations yields the same hash;
2. the name of the file is included in the final hash computation.
*** Limitations:
1. it ignores empty folders from the hash computation;
2. the implementation's only goal is to represent with a digest whether the content of a given directory is the same or not. Leaf presence checking is obviously missing from it.
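The first limitation (empty folders) is easy to demonstrate. Below is a sketch using a self-contained variant of the function above — a subshell with =cd= replaces =pushd=/=popd= so the hash is the only output — with an arbitrary temporary directory:

```shell
# Variant of the merkle-tree function above: a subshell + cd replaces
# pushd/popd, so nothing but the final hash reaches stdout.
merkle-tree () {
  (
    cd "${1-.}" || exit 1
    find . -type f | \
      sort | \
      xargs -I{} sha256sum "{}" | \
      sha256sum | \
      awk '{print $1}'
  )
}

dir=$(mktemp -d)
echo "one" > "$dir/one.txt"
before=$(merkle-tree "$dir")

# `find . -type f` only lists regular files, so an empty folder
# leaves the digest unchanged.
mkdir "$dir/empty/"
after=$(merkle-tree "$dir")

[ "$before" = "$after" ] && echo "empty folder ignored"
# prints "empty folder ignored"
```

Adding an actual file inside =empty/= would, of course, change the digest again.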
*** Testing locally with sample data
#+BEGIN_SRC bash -n
mkdir /tmp/merkle-tree-test/
cd /tmp/merkle-tree-test/
mkdir -p a/b/ a/c/ d/
echo "one" > a/b/one.txt
echo "two" > a/c/two.txt
echo "three" > d/three.txt
merkle-tree . # output is be343bb01fe00aeb8fef14a3e16b1c3d1dccbf86d7e41b4753e6ccb7dc3a57c3
merkle-tree . # output still is be343bb01fe00aeb8fef14a3e16b1c3d1dccbf86d7e41b4753e6ccb7dc3a57c3
echo "four" > d/four.txt
merkle-tree . # output is now b5464b958969ed81815641ace96b33f7fd52c20db71a7fccc45a36b3a2ae4d4c
rm d/four.txt
merkle-tree . # output back to be343bb01fe00aeb8fef14a3e16b1c3d1dccbf86d7e41b4753e6ccb7dc3a57c3
echo "hidden-five" > a/b/one.txt
merkle-tree . # output changed to 471fae0d074947e4955e9ac53e95b56e4bc08d263d89d82003fb58a0ffba66f5
#+END_SRC
It seems to work for this simple test case.

You can try copying and pasting it to verify the hash signatures.
** Using =merkle-tree= to check the output of =npm ci=
/I've done all of the following using Node.js v8.11.3 and npm@6.1.0./

In this test case I'll take the main repo of [[https://lernajs.io/][Lerna]][fn:js-repos]:
#+BEGIN_SRC bash -n
cd /tmp/
git clone https://github.com/lerna/lerna.git
cd lerna/
git checkout 57ff865c0839df75dbe1974971d7310f235e1109
npm ci
merkle-tree node_modules/ # outputs 11e218c4ac32fac8a9607a8da644fe870a25c99821167d21b607af45699afafa
rm -rf node_modules/
npm ci
merkle-tree node_modules/ # outputs 11e218c4ac32fac8a9607a8da644fe870a25c99821167d21b607af45699afafa
npm ci # test if it also works with an existing node_modules/ folder
merkle-tree node_modules/ # outputs 11e218c4ac32fac8a9607a8da644fe870a25c99821167d21b607af45699afafa
#+END_SRC
Good job =npm ci= :)

#6 and #9 take some time to run (21 seconds on my machine), but this specific use case isn't performance-sensitive. The slowest step is computing the hash of each individual file.
** Conclusion
=npm ci= really "generates identical trees".

I'm not aware of any other existing solution for verifying the hash signature of a directory. If you know of any, I'd [[mailto:eu@euandre.org][like to know]].
** /Edit/
2019/05/22: Fix spelling.

[fn:npm-install] The [[https://docs.npmjs.com/cli/install#description][documentation]] claims =npm install= is driven by the existing =package-lock.json=, but that's actually [[https://github.com/npm/npm/issues/17979#issuecomment-332701215][a little bit tricky]].
[fn:js-repos] Finding a big known repo that actually committed the =package-lock.json= file was harder than I expected.
---
title: Using <code>youtube-dl</code> to manage YouTube subscriptions
date: 2018-12-21
---
I've recently read the [[https://www.reddit.com/r/DataHoarder/comments/9sg8q5/i_built_a_selfhosted_youtube_subscription_manager/][announcement]] of a very nice [[https://github.com/chibicitiberiu/ytsm][self-hosted YouTube subscription manager]]. I haven't used YouTube's built-in subscriptions for a while now, and haven't missed them at all. When I saw the announcement, I considered writing about the solution I've built on top of [[https://youtube-dl.org/][youtube-dl]].
** Background: the problem with YouTube
In many ways, I agree with [[https://staltz.com/what-happens-when-you-block-internet-giants.html][André Staltz's view on data ownership and privacy]]:
#+BEGIN_QUOTE
I started with the basic premise that “I want to be in control of my data”. Sometimes that meant choosing when to interact with an internet giant and how much I feel like revealing to them. Most of times it meant not interacting with them at all.
I don’t want to let them be in full control of how much they can know about me. I don’t want to be in autopilot mode.
(...)
Which leads us to YouTube. While I was able to find alternatives to Gmail (Fastmail), Calendar (Fastmail), Translate (Yandex Translate), etc, YouTube remains as the most indispensable Google-owned web service. It is really really hard to avoid consuming YouTube content. It was probably the smartest startup acquisition ever. My privacy-oriented alternative is to watch YouTube videos through Tor, which is technically feasible but not polite to use the Tor bandwidth for these purposes. I’m still scratching my head with this issue.
#+END_QUOTE
Even though I don't use most alternative services he mentions, I do watch videos from YouTube. But I also feel uncomfortable logging in to YouTube with a Google account, watching videos, creating playlists and similar things.

Using the mobile app is worse: you can't even block ads in there. You're in less control of what you share with YouTube and Google.
** youtube-dl
youtube-dl is a command-line tool for downloading videos, from YouTube and [[https://rg3.github.io/youtube-dl/supportedsites.html][many other sites]]:
#+BEGIN_SRC shell
$ youtube-dl https://www.youtube.com/watch?v=rnMYZnY3uLA
[youtube] rnMYZnY3uLA: Downloading webpage
[youtube] rnMYZnY3uLA: Downloading video info webpage
[download] Destination: A Origem da Vida _ Nerdologia-rnMYZnY3uLA.mp4
[download] 100% of 32.11MiB in 00:12
#+END_SRC
It can be used to download individual videos as shown above, but it also has some interesting flags that we can use:
- =--output=: use a custom template to create the name of the downloaded file;
- =--download-archive=: use a text file for recording and remembering which videos were already downloaded;
- =--prefer-free-formats=: prefer free video formats, like =webm=, =ogv= and Matroska =mkv=;
- =--playlist-end=: how many videos to download from a "playlist" (a channel, a user or an actual playlist);
- =--write-description=: write the video description to a =.description= file, useful for accessing links and extra content.

Putting it all together:
#+BEGIN_SRC shell
$ youtube-dl "https://www.youtube.com/channel/UClu474HMt895mVxZdlIHXEA" \
    --download-archive ~/Nextcloud/cache/youtube-dl-seen.conf \
    --prefer-free-formats \
    --playlist-end 20 \
    --write-description \
    --output "~/Downloads/yt-dl/%(uploader)s/%(upload_date)s - %(title)s.%(ext)s"
#+END_SRC
This will download the latest 20 videos from the selected channel, and write down the video IDs in the =youtube-dl-seen.conf= file. Running it again immediately afterwards won't have any effect.

If the channel posts one more video, running the same command again will download only the last video, since the other 19 were already downloaded.
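That idempotency comes from the archive file: youtube-dl records one =<extractor> <video-id>= line per finished download and skips any ID already present. The mechanism can be sketched with a stub that performs no real network access (the stub function and its IDs are illustrative, not youtube-dl's actual code):

```shell
cd "$(mktemp -d)"
archive=youtube-dl-seen.conf
touch "$archive"

# Stub of the archive check: "download" only when the ID isn't
# recorded yet, then record it -- mimicking --download-archive.
maybe_download() {
  if grep -qx "youtube $1" "$archive"; then
    echo "skipping $1"
  else
    echo "downloading $1"   # a real run would invoke youtube-dl here
    echo "youtube $1" >> "$archive"
  fi
}

maybe_download rnMYZnY3uLA   # prints "downloading rnMYZnY3uLA"
maybe_download rnMYZnY3uLA   # prints "skipping rnMYZnY3uLA"
```

Deleting a line from the archive file is all it takes to make youtube-dl fetch that video again.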

With this basic setup you have a minimal subscription system at work, and you can create some functions to help you manage it:
#+BEGIN_SRC shell
#!/usr/bin/env bash
# bash is required: `export -f` is a bashism, not POSIX sh.

export DEFAULT_PLAYLIST_END=15

download() {
  youtube-dl "$1" \
    --download-archive ~/Nextcloud/cache/youtube-dl-seen.conf \
    --prefer-free-formats \
    --playlist-end "$2" \
    --write-description \
    --output "~/Downloads/yt-dl/%(uploader)s/%(upload_date)s - %(title)s.%(ext)s"
}
export -f download


download_user() {
  download "https://www.youtube.com/user/$1" "${2-$DEFAULT_PLAYLIST_END}"
}
export -f download_user


download_channel() {
  download "https://www.youtube.com/channel/$1" "${2-$DEFAULT_PLAYLIST_END}"
}
export -f download_channel


download_playlist() {
  download "https://www.youtube.com/playlist?list=$1" "${2-$DEFAULT_PLAYLIST_END}"
}
export -f download_playlist
#+END_SRC
With these functions, you can now have a subscription-fetching script to download the latest videos from your favorite channels:
#+BEGIN_SRC shell
#!/usr/bin/env bash

download_user ClojureTV 15
download_channel "UCmEClzCBDx-vrt0GuSKBd9g" 100
download_playlist "PLqG7fA3EaMRPzL5jzd83tWcjCUH9ZUsbX" 15
#+END_SRC
Now, whenever you want to watch the latest videos, just run the above script and you'll get all of them on your local machine.
** Tradeoffs
*** I've made it for myself, with my use case in mind
**** Offline
My internet speed is somewhat reasonable[fn:reasonable-internet], but it is really unstable. Either at work or at home, it's not uncommon to lose internet access for 2 minutes 3~5 times every day, and to stay completely offline for a couple of hours once every week.

Working through the hassle of keeping a playlist on disk has paid off many, many times. Sometimes I don't even notice when the connection drops for some minutes, because I'm watching a video and working on some document, all on my local computer.

There's also no quality adjustment for YouTube's web player: I always pick the highest quality and it doesn't change during the video. For some types of content, like a podcast with some tiny visual resources, this doesn't matter much. For other types of content, like a keynote presentation with text written on the slides, watching in 144p isn't really an option.

If the internet connection drops during the video download, youtube-dl will resume from where it stopped.

This is an offline-first benefit that I really like, and it works well for me.
**** Sync the "seen" file
I already have a running instance of Nextcloud, so just dumping the =youtube-dl-seen.conf= file inside Nextcloud was a no-brainer.

You could try putting it in a dedicated git repository, and wrap the script with an autocommit after every run. If you ever had a merge conflict, you'd simply accept all changes and then run:
#+BEGIN_SRC shell
$ sort -u youtube-dl-seen.conf -o youtube-dl-seen.conf
#+END_SRC
to tidy up the file.
**** Doesn't work on mobile
My primary device that I use every day is my laptop, not my phone. It works well for me this way.

Also, it's harder to add ad-blockers to mobile phones, and most mobile software still depends on Google's and Apple's blessing.

If you wish, you can sync the videos to the SD card periodically, but that's a bit of extra manual work.
*** The Good
**** Better privacy
We don't even have to configure the ad-blocker to keep ads and trackers away!

YouTube still has your IP address, so using a VPN is always a good idea. However, a timing analysis would still be able to identify you (considering the current implementation).
**** No need to self-host
There's no host that needs maintenance. Everything runs locally.

As long as you keep youtube-dl itself up to date and sync your "seen" file, there's little extra work to do.
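A caveat on the tidy-up step in the "seen"-file section above: a plain =uniq file > file= would truncate the file before =uniq= reads it, and =uniq= only collapses /adjacent/ duplicates anyway. =sort -u= with =-o= avoids both problems, since it reads all input before writing. A quick sketch, using example video IDs from this post:

```shell
cd "$(mktemp -d)"

# Hypothetical archive with scattered duplicates, as could happen
# after accepting both sides of a merge conflict.
cat > youtube-dl-seen.conf <<'EOF'
youtube rnMYZnY3uLA
youtube dQw4w9WgXcQ
youtube rnMYZnY3uLA
EOF

# sort -u reads the whole input before -o writes the result, so using
# the same file as both input and output is safe.
sort -u youtube-dl-seen.conf -o youtube-dl-seen.conf

cat youtube-dl-seen.conf
# two unique lines remain, sorted
```

The resulting order differs from the original append order, which doesn't matter here: youtube-dl only checks membership in the archive file, not ordering.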
**** Track your subscriptions with git
After creating a =subscriptions.sh= executable that downloads all the videos, you can add it to git and use it to track metadata about your subscriptions.
*** The Bad
**** Maximum playlist size is your disk size
This is a good thing for getting a realistic view of your actual "watch later" list. However, I've run out of disk space many times, and now I need to be more aware of how much is left.
*** The Ugly
We can only avoid all the bad parts of YouTube with youtube-dl as long as YouTube keeps the videos public and programmatically accessible. If YouTube ever blocks that, we'd lose the ability to consume content this way, but we'd also lose confidence in YouTube as a healthy repository of videos on the internet.
** Going beyond
Since you're running everything locally, here are some possibilities to be explored:
*** A playlist that is too long to be downloaded all at once
You can wrap the =download_playlist= function (let's call the wrapper =inc_download=) and, instead of passing it a fixed number for the =--playlist-end= parameter, store =$n= in a file (something like =$HOME/.yt-db/$PLAYLIST_ID=) and increment it by =$step= every time you run =inc_download=.

This way you can incrementally download videos from a huge playlist without filling your disk with gigabytes of content all at once.
*** Multiple computer scenario
The =download_playlist= function could be aware of the specific machine that it is running on and apply specific policies depending on the machine: always download everything; only download videos that aren't present anywhere else; etc.
** Conclusion
youtube-dl is a great tool to keep at hand. It covers a really large range of video websites and works robustly.

Feel free to copy and modify this code, and [[mailto:eu@euandre.org][send me]] suggestions of improvements or related content.
** /Edit/
2019/05/22: Fix spelling.

[fn:reasonable-internet] Considering how expensive it is and the many ways it could be better, but also how much it has improved over the last years, I say it's reasonable.
---
title: Using NixOS as a stateless workstation
date: 2019-06-02
---
Last week[fn:last-week] I changed back to an old[fn:old-laptop] Samsung laptop, and installed [[https://nixos.org/][NixOS]] on it.

After using NixOS on another laptop for around two years, I wanted to verify how reproducible my desktop environment was, and how far NixOS can actually go in recreating my whole OS from my configuration files and personal data. I gravitated towards NixOS after trying (and failing) to create an =install.sh= script that would imperatively install and configure my whole OS using apt-get. When I found a GNU/Linux distribution built on top of the idea of declaratively specifying the whole OS, I was automatically convinced[fn:convinced].

I was impressed. Even though I've been experiencing the benefits of Nix isolation daily, I always felt skeptical that something would be missing, because the devil is always in the details. But the result was much better than expected!

There were only 2 missing configurations:
1. tap-to-click on the touchpad wasn't enabled by default;
2. the default theme of gnome-terminal is "Black on white" instead of "White on black".

That's all.

I haven't checked if I can configure those in the NixOS GNOME module, but I guess both are scriptable and could be set in a fictional =setup.sh= run.

This makes me really happy, actually. Happier than I anticipated.

Having such a powerful declarative OS makes me feel like my data is the really important stuff (as it should be), and I can interact with it on any workstation.
All I need is an internet connection and a few hours to download everything. It feels like my physical workstation and the installed OS are serving me and my data, instead of me feeling hostage to the specific OS configuration of the moment. Having a few backup copies of everything important extends such peacefulness.

After this positive experience with recreating my OS from simple Nix expressions, I started to wonder how far I could go with this, and started considering other areas of improvement:
*** First run on a fresh NixOS installation
Right now the initial setup relies on non-declarative manual tasks, like decrypting some credentials, or manually downloading *this* git repository with specific configurations before *that* one.

I wonder what some areas of improvement are on this topic, and if investing in it is worth it (both time-wise and happiness-wise).
*** Emacs
Right now I'm using [[http://spacemacs.org/][Spacemacs]], which is a community package curation and configuration on top of [[https://www.gnu.org/software/emacs/][Emacs]].

Spacemacs does support the notion of [[http://spacemacs.org/doc/LAYERS.html][layers]], which you can declaratively specify and let Spacemacs do the rest.

However this solution isn't nearly as robust as Nix: being purely functional, Nix does describe everything required to build a derivation, and knows how to do so. Spacemacs is closer to more traditional package managers: even though the layers list is declarative, the installation is still very much imperative. I've had trouble with Spacemacs not behaving the same on different computers, both with identical configurations, only brought back to convergence after a =git clean -fdx= inside =~/.emacs.d/=.

The ideal solution would be managing Emacs packages with Nix itself. After a quick search I did find that [[https://nixos.org/nixos/manual/index.html#module-services-emacs-adding-packages][there is support for Emacs packages in Nix]].
So far I was only aware of [[https://www.gnu.org/software/guix/manual/en/html_node/Application-Setup.html#Emacs-Packages][Guix support for Emacs packages]].

This isn't a trivial change because Spacemacs does include extra curation and configuration on top of Emacs packages. I'm not sure of the best way to improve this right now.
*** myrepos
I'm using [[https://myrepos.branchable.com/][myrepos]] to manage all my git repositories, and the general rule I apply is to add any repository-specific configuration in myrepos' =checkout= phase:
#+BEGIN_SRC shell
# sample ~/.mrconfig file snippet
[dev/guix/guix]
checkout =
  git clone https://git.savannah.gnu.org/git/guix.git guix
  cd guix/
  git config sendemail.to guix-patches@gnu.org
#+END_SRC
This way, when I clone this repo again the email sending is already pre-configured.

This works well enough, but the solution is too imperative, and my =checkout= phases tend to become brittle over time if not enough care is taken.
*** GNU Stow
For my home profile and personal configuration I already have a few dozen symlinks that I manage manually. This has worked so far, but the solution is sometimes fragile and [[https://git.sr.ht/~euandreh/dotfiles/tree/316939aa215181b1d22b69e94241eef757add98d/bash/symlinks.sh#L14-75][not declarative at all]]. I wonder if something like [[https://www.gnu.org/software/stow/][GNU Stow]] can help me simplify this.
** Conclusion
I'm really satisfied with NixOS, and I intend to keep using it. If what I've said interests you, maybe try tinkering with the [[https://nixos.org/nix/][Nix package manager]] (not the whole NixOS) on your current distribution (it can live alongside any other package manager).

If you have experience with declarative Emacs package management, GNU Stow or any similar tool, etc., [[mailto:eu@euandre.org][I'd like some tips]]. If you don't have any experience at all, [[mailto:eu@euandre.org][I'd still love to hear from you]].

[fn:last-week] "Last week" as of the start of this writing, so around the end of May 2019.
[fn:old-laptop] I was using a 32GB RAM, i7 and 250GB SSD Samsung laptop. The switch was back to an 8GB RAM, i5 and 500GB HDD Dell laptop. The biggest difference I noticed was the faster memory, both RAM availability and disk speed, but I had 250GB less local storage space.
[fn:convinced] The declarative configuration aspect is something that I now completely take for granted, and I wouldn't consider using something which isn't declarative. A good metric to show this is me realising that I can't pinpoint the moment when I decided to switch to NixOS. It's like I had a distant past when this wasn't true.