I have a laptop PC that I use for everyday tasks. I mainly use my PC for software development (so lots of git repos + IDE + development tools), browsing... lots of different stuff actually. I'm using Debian testing with almost all software coming from package repositories (apt).

I would like to build another system (a desktop PC, same CPU architecture, i.e. amd64) that shall have the same data and system configuration: that is, I would like to seamlessly (not necessarily in real time though) switch from the laptop to the desktop (and back) and continue my work there.

I did some research and I found several challenges:

  • different hardware configuration might lead to some stuff being different (kernel version and/or modules, /etc/modprobe.d, /etc/modules-load.d and so on)
  • not everything in the system state might be reflected by the file system (unless I sync a "cold state", e.g. during boot and shutdown)
  • conflict management
  • if the sync is done during boot, what to do if it fails?
  • how to handle installed packages? (i.e. via "manifest files" + redo installs/removals vs sync the actual package contents in the file system + /var/lib/dpkg)

I don't need version tracking, just that the two systems are in sync at some point in time (with possibly only one of them being active at any given time - a third "bridge" system would be needed of course).

Some thoughts, off the top of my head:

  • maybe handle user data (e.g. /home) and the system (/usr, /var, /etc) with separate tools (I'm thinking git vs rsync)
  • near-realtime sync ("hot" sync) vs sync on boot/shutdown ("cold" sync - probably safer?)
  • system-dependent and ephemeral stuff should be excluded somehow

I don't want to use remote desktop: the purpose of this exercise is also disaster recovery, so both systems should be ready to use at a moment's notice.

Of course classic configuration management tools (Puppet, Ansible, etc.) are not suitable for this IMHO because the state of the system changes often (because it's a personal computer for personal use).

Am I overengineering it?

  • What about placing your /home on a shared drive (maybe some external SSD for rsyncing)? Or mount /home during boot from a network shared disk? As for system-related files, I don't think it is a good idea to copy files back and forth. It may work or fail in mysterious ways. Just install the same packages using apt. – td211, Sep 1 at 11:15
  • You might find syncthing useful for some of your $HOME data. – Mark Setchell, Sep 1 at 19:38
  • This is not worthy of an answer, but if you can accept the performance penalty you could use a virtual machine and keep the disk image in a shared folder of some sort. – Vladimir Cravero, Sep 2 at 6:42
  • @DanieleRicci well, you'd only need to be "online" to sync the disk between the machines. Regarding performance, I'm hesitant to dismiss this as "too many reads/writes" – hypervisor software is really good nowadays and only gets tricky if you need graphics acceleration, to be honest. – Vladimir Cravero, Sep 2 at 11:44
  • Well, keep in mind that a VM is a turnkey solution @daniele_athome – it will just work and is super easy to set up, with no need to manage what you sync and what you don't: it is literally the same computer every time you fire it up. It also makes backing up the system trivial. – Vladimir Cravero, Sep 2 at 16:26

5 Answers

Answer 1 (score 10)

Starting from the bottom:

maybe handle user data (e.g. /home) and the system (/usr, /var, /etc) with separate tools (I'm thinking git vs rsync)

Right approach! In fact, most distros, Debian included, stick to some standards for directories. That lets you end up with the same system on both machines if you keep the apt-installed software in sync, keep the configuration in /etc selectively in sync (things like your machine name, some storage configuration, /etc/fstab, your host name, your power settings… are configured there, and you might want to sync some, but not all of it), and sync /var/lib for mutable but persistent state (from containers to editor plugins).

When you have some non-apt-installed software in /usr/local, you need to look at that individually; generally, /usr/local/etc and /usr/local/var/lib might be worth syncing.

I don't know whether everything in /home (excluding ~/.cache and, similarly to /etc, treating ~/.config selectively) is appropriately handled by git – actually, no, it's not. But rsync, restic etc. are suitable synchronization methods.
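
To make the selective syncing concrete, here is a minimal sketch of a one-way rsync of /home that skips ephemeral data; the target host name "desktop" and the exclude list are illustrative assumptions, not part of the answer:

    # Push /home to the other machine, preserving hard links, ACLs and xattrs,
    # and skipping caches and other ephemeral data (adjust the excludes to taste).
    rsync -aHAX --delete --info=progress2 \
        --exclude='.cache/' \
        --exclude='.local/share/Trash/' \
        /home/ desktop:/home/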

near-realtime sync ("hot" sync) vs sync on boot/shutdown ("cold" sync - probably safer?)

Hot sync is dangerous for things outside of /home/, because your system's services will already be accessing those files; you don't want to randomly install stuff during bootup or shutdown (especially not the latter, because how would you even notice if something went wrong?).

So I'd go with: "hot"-sync user data, but leave software/system sync to dedicated reboots. Split the system sync into two jobs: making sure you have the same packages installed as the other machine (a job for apt-get), and syncing /etc and /var/lib (a job for rsync).

I'd probably just create a sentinel file (e.g., /var/lib/syncnextboot) and have a systemd unit activated at boot that checks for the existence of said file and exits successfully if it isn't there. Otherwise it fetches the package list from wherever, runs apt-get to install that exact list of packages, then rsyncs over the right parts of /etc and /var/lib, and only after all of that has succeeded does it delete the sentinel file and exit successfully; otherwise it exits unsuccessfully (your system is then in what systemctl would call a "degraded" state, but that's not a problem beyond not being synced).
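
A minimal sketch of the script such a unit could run; the peer host name "laptop", the package-list path and the include-list files are assumptions for illustration:

    #!/bin/bash
    # Boot-time sync: only acts if the sentinel file exists, and only removes it
    # once every step has succeeded, so a failed run leaves the unit failed.
    set -euo pipefail

    SENTINEL=/var/lib/syncnextboot
    [ -e "$SENTINEL" ] || exit 0     # nothing to do, exit successfully

    # 1. Fetch the package selection recorded on the other machine (assumed path).
    rsync laptop:/var/backups/pkg-selections /run/pkg-selections

    # 2. Reproduce that exact selection locally.
    dpkg --set-selections < /run/pkg-selections
    apt-get -y dselect-upgrade

    # 3. Pull over the shared parts of /etc and /var/lib (include lists are assumed).
    rsync -aHAX --files-from=/etc/sync/etc-include.txt    laptop:/ /
    rsync -aHAX --files-from=/etc/sync/varlib-include.txt laptop:/ /

    # 4. Mark the sync as done only if everything above succeeded.
    rm "$SENTINEL"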

The /home rsyncing can be done opportunistically, as a service that depends on network-online.target, is launched when online, and checks for reachability of the coordination server.
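
A sketch of the job such a service could run, assuming the coordination server is reachable under the made-up name "synchost":

    #!/bin/bash
    # Opportunistic /home sync: do nothing unless the coordination server answers.
    set -euo pipefail

    if ping -c1 -W2 synchost >/dev/null 2>&1; then
        rsync -aHAX --delete --exclude='.cache/' "$HOME/" "synchost:backups/$(hostname)/"
    fi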


This all sounds like you might like layered, immutable-base operating systems, which make stronger guarantees on the ability to take all changes to a system and transport them anywhere.

In the Fedora world, such an OS would be Fedora Silverblue. I'm not sure whether there's an equivalent Debian-based OS. Wikipedia suggests "Endless OS", but I'm always wary of distros that I haven't heard of; they typically have problems that no one has heard of before, too ;) But you could give it a try if all the software you need comes with it or is available as a flatpak. Or, if you're not married to Debian, try Silverblue, which, unlike Endless, not only allows you to install software via flatpak but also lets you define software layers, which you can modify to install "normally" packaged software (i.e., installed via dnf/rpm, Fedora's apt-get/dpkg).


different hardware configuration might lead to some stuff being different (kernel version and/or modules, /etc/modprobe.d, /etc/modules-load.d and so on)

Mostly not a concern; usually solved by selectively excluding the packages that bring the laptop's graphics card drivers from the workstation, and vice versa.

not everything in the system state might be reflected by the file system (unless I sync a "cold state", e.g. during boot and shutdown)

But that's also not state that makes sense to replicate, so, don't care!

conflict management

Hard because hard. I'd advocate for having the self-discipline to not uninstall software using apt on one device while installing it on another. The rest might be fairly rsyncable.

how to handle installed packages? (i.e. via "manifest files" + redo installs/removals vs sync the actual package contents in the file system + /var/lib/dpkg)

Yes! Core problem! Luckily, as discussed above, keeping the packages in sync is the core solution to the core problem, and apt-get and dpkg make that easy.
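
For the "manifest file" route specifically, a minimal sketch using stock Debian tooling (the file name is an assumption):

    # On the machine whose package set you treat as authoritative:
    apt-mark showmanual > manual-packages.txt

    # On the other machine, after copying the file over:
    xargs -a manual-packages.txt sudo apt-get install -y

    # Optionally purge packages that are manually installed locally
    # but no longer listed in the manifest:
    comm -23 <(apt-mark showmanual | sort) <(sort manual-packages.txt) \
        | xargs -r sudo apt-get purge -y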

  • Thanks for the insights! Your answer gave me quite a few ideas for continuing to explore this. I like your way of doing the OS app & config sync on demand, so I can keep it under control (up to a point, of course, but it's surely safer than doing it automatically at every boot). I love Debian and I'd rather stay on it – I know I would probably benefit greatly from immutable distros, but I'd rather not change my main distro (several years of knowledge, heh) for this. – daniele_athome, Sep 1 at 12:58
  • @DanieleRicci that sounds very reasonable! – Marcus Müller, Sep 1 at 13:16
  • I'm going to accept this answer because it's the closest to what I'm trying to achieve. I will update the question with my project when I open-source it. – daniele_athome, Oct 14 at 9:33
Answer 2 (score 3)

A modern addition to what has been said: you can build upon the UBlue project, which itself builds upon Fedora's atomic desktop images (a.k.a. Fedora Silverblue, Kinoite, etc.).

Of course, data and configs may need to be synced with whatever method you wish (e.g. for user dotfiles/configuration, a GitHub repository or dedicated apps like chezmoi or dotz), but for system apps and system adjustments you can easily build your own system image with UBlue.

So for any system adjustments you might want, you can use a cloud-native approach and fork a GitHub project where everything needed to build an image is automated. You can then share and use this image on all your systems. Furthermore, you can flexibly build it upon different base images, like Bazzite for gaming. And you will keep getting all (security) updates from "upstream", as it all works with a Dockerfile building upon existing images.

If you're not just an "end user" and want to work at a lower level, you can of course also define your own "bootable containers" with bootc, which is the tool this is all based upon.

  • Nice approach, but... I don't know, it seems like it would be quite an effort to maintain. It seems overkill for just two machines. Also, I'm a heavy tinkerer: I'm afraid this approach could have limitations for me because the base OS is considered immutable, right? – daniele_athome, Sep 2 at 14:38
  • Neither is actually true, as far as I know (never tried it in practice). Maintenance: unless your modifications break something in the base image (and if they do, you'd have to fix it anyway), the CI should automatically publish new updates. As for tinkering: yes, AFAIK you can do everything you can do in a Dockerfile for your image, i.e. basically everything Linux can do. That does not sound like a big limit. :D – rugk, Sep 2 at 20:46
  • Also see the community examples in the GitHub README to get an idea of what's possible. – rugk, Sep 2 at 20:46
  • Ah, and "immutable" is a term they don't use that often anymore, because "atomic" captures it better: the system can of course be changed and extended, especially in the base image. But once you deliver it (in a container) to the end user, it is kind of immutable there – there are still ways to tinker with it and layer packages anyway, but in the end you'd want to include those in the base image. More information here. – rugk, Sep 2 at 20:59
  • That's what I meant by "effort to maintain": the Dockerfile, the CI stuff... I mean, it would surely be great for maintaining hundreds of machines, but for just 2 machines used by 1 person it only adds things to do. Or I would need to radically change the way I manage my personal computer and build it into my habits. I might suggest it to my employer though; maybe we can finally ditch Windows :-) – daniele_athome, Sep 2 at 21:22
Answer 3 (score 2)

For user data synchronization, I have found that syncthing works well. It practically instantly detects changed files, so it is possible to rapidly switch from one computer to another and continue working on the same files.

I myself have found syncing user files sufficient and haven't bothered synchronizing the system. That does result in some extra maintenance burden, so I understand the desire.


For system files, I would recommend declaring one system as primary and doing all the major system changes there. If you use btrfs or zfs, you can set up snapshots of the system state. To synchronize the two systems, you would have a script that does the following:

  1. For the initial setup, create prev-sync on the primary system from the current state, and use btrfs send to transfer it in full to the secondary system. Copy prev-sync on the secondary to active and configure that system to use it as the root volume.

Then for each incremental synchronization:

  1. Take a new snapshot on the primary, next-sync (replacing any interrupted one if it exists).
  2. Use btrfs send to transmit the difference between next-sync and prev-sync to the secondary system.
  3. Rename next-sync to prev-sync on both systems.
  4. On the secondary system, rename the old active to prev-active and copy prev-sync to active.
  5. The secondary system will continue to run on the old state until the next reboot; you don't have to reboot immediately.

The benefit of btrfs send or zfs send is that they don't have to spend time searching for differences between files: the differences are readily available thanks to the copy-on-write mechanism behind the snapshots. For send/receive to work, you always need the unmodified prev-sync available on the receiving side. If you make local modifications to active on the secondary system, they will only persist until the next sync.
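
A sketch of one incremental round of this procedure, assuming the snapshots live under /snapshots on both machines and the secondary is reachable over SSH as "secondary" (both assumptions):

    #!/bin/bash
    # One incremental sync round with btrfs send/receive, run on the primary as root.
    set -euo pipefail
    SNAP=/snapshots   # assumed snapshot directory on both machines

    # 1. Fresh read-only snapshot of the running root, replacing any interrupted one.
    btrfs subvolume delete "$SNAP/next-sync" 2>/dev/null || true
    btrfs subvolume snapshot -r / "$SNAP/next-sync"

    # 2. Send only the difference against the previous common snapshot.
    btrfs send -p "$SNAP/prev-sync" "$SNAP/next-sync" | ssh secondary "btrfs receive $SNAP"

    # 3. Rotate snapshots on both sides so next-sync becomes the new baseline.
    btrfs subvolume delete "$SNAP/prev-sync"
    mv "$SNAP/next-sync" "$SNAP/prev-sync"
    ssh secondary "btrfs subvolume delete $SNAP/prev-sync && mv $SNAP/next-sync $SNAP/prev-sync"

    # 4. On the secondary, keep the old root as prev-active and promote the new
    #    state to a writable 'active' subvolume; it takes effect on the next reboot.
    ssh secondary "btrfs subvolume delete $SNAP/prev-active 2>/dev/null || true"
    ssh secondary "mv $SNAP/active $SNAP/prev-active && btrfs subvolume snapshot $SNAP/prev-sync $SNAP/active"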

With snapshots, you can have a second GRUB boot option that instructs the kernel to mount prev-active instead. That way you always have a quick way to recover if a sync fails.

  • Interesting approach... thanks, I'm going to research btrfs/zfs and consider it for my use case. – daniele_athome, Sep 3 at 8:15
Answer 4 (score 1)

In the details, laptops and desktops are different (not the same processor, not the same optimizations, perhaps not the same kernel and compiler versions, often not the same graphics cards and screen sizes).

But as a software developer you mostly use some version control (e.g. git) on the software you are coding.

I'm in the same position as you and just use git to maintain the same source code base on both machines.

Of course, it really depends on the kind of software you write. For example, OpenCL source code is extremely dependent on the hardware.

And for GUI or web development, you could have different fonts installed on the two computers. And different preferred window sizes or colors.

Also, two different Linux distributions have different kernels, different library versions, different compilers or debuggers, etc.

For multithreaded applications the number of cores also matters.

PS. People ask me about my affiliation. I am retired but do want to teach part-time, and I am coding RefPerSys. It is nonsense that Stack Overflow cannot be contacted by email or web forms.

  • Not a proper answer to the question, but you provided useful insights on what to keep in mind when considering exclusion lists for the sync operations (i.e. differences between the two systems). Thanks! – daniele_athome, Sep 3 at 9:32
Answer 5 (score 0)

Possibly define the whole thing as a VM and migrate it between the two as needed. This would abstract the hardware differences away to an extent. Not sure how well X/Wayland work virtualised...
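
If you went this route with libvirt/KVM, a rough sketch of a manual "cold" hand-over could look like this; the VM name "workvm", the host name "otherhost" and the paths are assumptions for illustration:

    # On the machine you are leaving: stop the VM and export its definition.
    virsh shutdown workvm
    virsh dumpxml workvm > /tmp/workvm.xml

    # Copy the definition and the disk image to the other machine.
    rsync -a --partial /tmp/workvm.xml otherhost:/tmp/
    rsync -a --partial --inplace /var/lib/libvirt/images/workvm.qcow2 \
          otherhost:/var/lib/libvirt/images/

    # On the machine you are switching to: register the VM (first time only) and start it.
    virsh define /tmp/workvm.xml
    virsh start workvm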
