We have some part of it in PRs…
Yep, but they are part of GitHub’s proprietary platform, not a Git feature,
so what if tomorrow “Nix” has to abandon GitHub? I am not talking about
migrating history, but about choosing a new tool that keeps the workflow up
and running for all devs and users.
I meant the git repo; I sent a correction afterwards.
I want a separate, resizable FS for /tmp that is wiped on boot; mkfs is
the fastest way to do that. I don’t want to have to relearn everything to
gain things I didn’t ask for.
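To make the idea concrete, a wipe-on-boot /tmp can be a tiny script run early at boot, before /tmp is mounted; this is only a sketch, and the device path /dev/vg0/tmp is a made-up example of a dedicated volume:

```shell
#!/bin/sh
# Sketch: recreate /tmp's filesystem on every boot (run as root, before
# /tmp is mounted). /dev/vg0/tmp is a hypothetical LV dedicated to /tmp.
mkfs.ext4 -F -q -L tmp /dev/vg0/tmp   # -F: no confirmation prompt, -q: quiet
mount -o nosuid,nodev /dev/vg0/tmp /tmp
chmod 1777 /tmp                       # sticky, world-writable, as /tmp expects
```

Re-running mkfs on a dedicated volume is indeed faster than recursively deleting a populated /tmp, since it just writes fresh metadata instead of walking the old tree.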
I don’t want snapshots, because I want to put things into VCS.
VCS are nice for preserving history, but not really nice with binary
contents (from video/image/music to PDF docs etc.) and not very performant
or efficient for backups. Personally I chose nilfs2 because, being a
log-structured FS, it creates “zero-overhead snapshots” (called
checkpoints) on every fs write(), so I have effective protection against
accidental deletion/overwrite, with the most up-to-date state always
recoverable.
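For readers who have not used nilfs2: recovery from a checkpoint looks roughly like this, using the standard nilfs-utils commands (the device /dev/sdb1, mountpoint, and checkpoint number are example values):

```shell
# Recovering an accidentally deleted file from a nilfs2 checkpoint.
lscp /dev/sdb1                        # list checkpoints with timestamps
chcp ss /dev/sdb1 1234                # promote checkpoint 1234 to a snapshot
                                      # (only snapshots can be mounted)
mount -t nilfs2 -r -o cp=1234 /dev/sdb1 /mnt/cp
cp /mnt/cp/home/me/precious.txt ~/    # copy the file back from the past
umount /mnt/cp
chcp cp /dev/sdb1 1234                # demote it again so the GC can reclaim space
```

The garbage collector (nilfs_cleanerd) eventually reclaims plain checkpoints, which is why promoting one to a snapshot is needed before mounting it.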
Some people just use chattr +i (personally I have a workflow that
minimizes the risk of accidental overwrites, but +i is indeed sometimes
useful).
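For context, chattr +i sets the immutable attribute, which blocks writes, renames, and deletion even for root until the flag is cleared (supported on ext4, xfs, and most mainstream Linux filesystems; requires root):

```shell
# Protect a file against accidental overwrite/deletion with the immutable flag.
chattr +i notes.txt
echo oops >> notes.txt   # fails with "Operation not permitted"
rm notes.txt             # also refused
lsattr notes.txt         # shows the 'i' attribute
chattr -i notes.txt      # clear the flag to edit the file again
```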
Of course, this implies a division into large-and-immutable and
small-and-mutable (which goes into VCS). Large-and-mutable needs special
care anyway, and special tools like the ones Postgres needs…
nilfs2 seems to have some other nice properties, though, but I never
got around to trying it.
As with zfs, I can enlarge and shrink live volumes without issues, but
only on top of LVM, and LVM itself is not as comfortable and quick as zfs.
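For comparison, the LVM side of that resize dance looks like this (the names vg0/home are examples; -r asks lvextend/lvreduce to resize the filesystem together with the volume):

```shell
# Growing a mounted ext4 LV is a one-liner and works online:
lvextend -r -L +10G /dev/vg0/home

# Shrinking is the uncomfortable part: ext4 can only shrink offline,
# so the filesystem must be unmounted first.
umount /home
lvreduce -r -L -10G /dev/vg0/home
mount /home
```

With zfs the equivalent is a single property change on a live dataset, which is the comfort gap being described.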
Dunno, LVM works and I do want to separate failure domains for
different FSes with different usage patterns.
for personal play/exploration… It’s not a matter of re-learning, it’s a
matter of storage needs; for many, the classic swap, root and home plain
partitions, even without a separate boot, are OK, as long as they
(I have 9 partitions mounted on system boot, actually, and only /home
can be called a data partition untouched by the system)
do not experience an HD failure or accidental overwrite just before a
backup.
I actually consider HDDs consumables, like printer ink/toner.
still SPOFs and an “incident waiting to happen”… The same is IMO true for
(I do use my laptop offline from time to time, and configure everything
in a way that fits best offline use — except things that are defined
in terms of network interaction, of course)
GUIs/modern installers: they are generally designed to fulfill typical
desktop users’ expectations; they play nice, they look nice, then once
released into the wild disasters happen, work starts on fixing the new
installer, and in a few years the result is a monster. That’s, for
instance, the path taken by SuSE in the past (YaST) and by Ubuntu
recently, with both Ubiquity’s bugs and limitations and its new DI
replacement…
Again, you are comparing GUIs with no globally consistent system concept
behind them to a plan for something on top of the NixOS configuration
system, a configuration that seems to satisfy you.
There are real, complicated questions about a GUI configuration editor. If
you want to prevent a GUI installer from happening, you should look
around the ecosystem for a question that is complicated, interesting and
relevant, so that it gets attention and obviously should be resolved
before going on. If there are hard trade-offs, consensus will just never
happen.
If all the questions do get resolved reasonably, then the editor will work
fine with the underlying ecosystem as it is and won’t create a problem.