I'm a fairly skilled Linux user but I'm also a human being, so one late evening while upgrading my server I inadvertently ended up wiping it.

That's when I decided that I'll never waste my time installing another server, and instead build something that lets me focus on what matters.


Ready, Boot, Docker!

A purely zero-installation and zero-configuration operating system that boots a server from bare-metal and straight into an operational Docker Engine would eliminate many tedious administrative tasks.

The effortless setup with preinstalled and preconfigured core software is only part of the solution. The entire system is immutable, so it cannot be modified, which increases security, removes the hassle of keeping up‑to‑date, and thereby ensures zero-maintenance altogether.

With the immutable system image running off the boot device, all custom configuration and data are stored on a dedicated, writable persistence device. Overlays virtually merge the persisted directories on top of the immutable ones. This absolute data segregation, combined with the system essentially being an information-less, static copy, makes backup, restore, reinstall, and provisioning straightforward.

Whether the goal is a minimum-friction home server that lets you focus on the content, an energy-efficient edge server that takes back control of sensitive data, or perhaps cloud-storming with an on-premises cluster, the solution could quite well be this minimalistic, easy-to-use, immutable, Linux-based, single-purpose, and streamlined bare-metal server operating system named Lightwhale!

Getting Started

The fastest way to get started is to boot Lightwhale off a USB flash drive on the actual hardware.

The easiest way to experiment with Lightwhale is to use a virtual machine like QEMU. This gives a faster turnaround when rebooting while testing e.g. immutability and persistence.

This guide uses QEMU and includes Linux commands that can be copied and pasted directly into a shell for convenience. Alternative instructions are available elsewhere.

Download Lightwhale

Start by downloading a Lightwhale image from the release repository. For QEMU, get the latest BIOS image:

wget https://lightwhale.asklandd.dk/download/latest-bios

Booting into QEMU

1. Install

Install QEMU.

sudo pacman -S qemu-system-x86        # On Arch/EndeavourOS
sudo apt-get install qemu-system-x86  # On Debian/Ubuntu
2. Boot

Boot Lightwhale in QEMU, setting up port forwarding from localhost to Lightwhale's SSH and HTTP services:

qemu-system-x86_64 \
  -enable-kvm \
  -m 2G \
  -drive file=lightwhale-1.3.0-rc26.bios.iso,format=raw,if=virtio \
  -device virtio-rng-pci \
  -net nic,model=virtio \
  -net user,hostfwd=tcp::10022-:22,hostfwd=tcp::10080-:80

Booting from USB Flash Drive

This requires a USB flash drive with enough capacity to store the Lightwhale image file. Note that any existing data on it will be overwritten and permanently lost.

1. Insert

Insert the USB flash drive and identify which device name it was assigned, e.g. /dev/sdx.
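One way to identify it (a sketch using the standard lsblk tool, nothing Lightwhale-specific) is to compare the block-device list before and after inserting the drive:

```shell
# List block devices; the flash drive appears with its size, model,
# and transport (usually "usb"):
lsblk -o NAME,SIZE,MODEL,TRAN
```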

2. Write

First unmount the device for safety, then write the image to it:

sudo umount /dev/sdx?
sudo dd conv=fsync status=progress \
  if=lightwhale-1.3.0-rc26.bios.iso of=/dev/sdx
3. Boot

Boot the computer off the USB flash drive, adjusting the boot order as necessary.


Now your Lightwhale server is up and ready for use.

Using Lightwhale

The default Lightwhale account is op with password opsecret.

After logging in, it is Docker business as usual; docker and docker compose are conveniently aliased as d and d-c respectively — for your immersive Docker-centric pleasure.

At this point Lightwhale is immutable, allowing safe experimentation; nothing is stored permanently, and rebooting will completely restore the system to factory settings.

Following are some recommended first steps and common use-cases.

Again, the included commands are intended to be copied and pasted into a Linux shell. They specifically target a Lightwhale server running in a local QEMU, but may serve as inspiration for other setups.

Log in using SSH

Using QEMU port-forwarding, connect to the host port 10022 which forwards to Lightwhale port 22 where SSH is served:

ssh -p 10022 op@localhost

For security reasons, it is important to change the op password before connecting Lightwhale to the internet, because now everyone knows your password.
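Changing it takes a single command once logged in; passwd is the standard tool and will prompt for the new password:

```shell
# Change the password of the current user (op):
passwd
```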

Or even better…

Disable SSH Password Login

To harden security and increase convenience, it is recommended to disable SSH password logins and instead only authenticate using SSH keys.

1. Generate SSH key pair on host machine

Chances are you already have one, but otherwise run:

ssh-keygen -t ed25519 \
  -C "This key belongs to $USER@$HOSTNAME"
2. Copy SSH key from host machine to Lightwhale
ssh-copy-id -p 10022 op@localhost

This will prompt for a password for the login.

In the event that ssh warns about remote host identification having changed, this is most likely because Lightwhale was rebooted and its SSH server auto-generated a new key during startup, which is unknown to the host machine. Simply delete the old host identification entry from ~/.ssh/known_hosts and then retry ssh-copy-id:

ssh-keygen -R "[localhost]:10022"
3. Log into Lightwhale using host SSH key

This should not (and never again) prompt for password:

ssh -p 10022 op@localhost
4. Reconfigure Lightwhale SSH service to disable password logins

Now, on Lightwhale, modify /etc/ssh/sshd_config to disable SSH password logins by adding PasswordAuthentication no to the configuration.

sudo sed '/^#PasswordAuthentication/a\
PasswordAuthentication no # Disable, i.e. only allow pubkey to auth.' \
  -i.orig /etc/ssh/sshd_config
nohup sudo /etc/init.d/S50sshd restart

When restarting sshd for the changes to take effect, notice how this cuts the branch you're sitting on ;)

Connect to a WiFi Network

Lightwhale will automatically attempt to connect to a wired LAN using DHCP, but never to wifi, unless configured to do so.

Use setup-wifi to connect to a wifi network given its SSID and password or get detailed help:

sudo setup-wifi --ssid YOURSSID --password YOURPASSWORD --country DK
sudo setup-wifi --help

Start a Web Server

A nice way to take Lightwhale's Docker Engine for a quick spin is to start a container with a web server and request its contents.

1. Create a simple index.html to serve
echo "Hello, World! This is Lightwhale!" | \
  sudo tee /var/www/index.html
2. Start the web server container
d run -d -p 80:80 -v /var/www:/public iorivur/darkhttpd

In the event that Docker fails to look up the registry, simply retry the command.

3. Query the web server

Either do the query locally, or like here from the host machine against Lightwhale in QEMU:

curl http://localhost:10080/


This concludes our tour.

Now would be a good time to reboot your immutable system and observe how all your changes are reverted and the system is fully restored with the next startup.

Then read on to learn why Lightwhale is immutable, and later how to enable persistence.

Immutability by Design

Ah, yes! This is what distinguishes Lightwhale from other server operating systems.

So, a clean server without the ability to install software and retain data across reboots is useless. This is why in practice one always combines the two core principles of Lightwhale: immutability and persistence. They are quite contradictory concepts, but work wonders when combined — like ☯.

Advantages of Immutability

Immutability is implemented by using a read-only root file system, or rootfs. Looking at immutability in isolation, it instantly brings a number of advantages into the system:

Zero installation

Since the immutable rootfs cannot have anything added to it, all necessary software has been preinstalled with sensible defaults from the start, primarily docker and sshd. In the end this results in a fully self-contained image that one can just write to a boot medium and then live-boot, similar in principle to a video game cartridge.

Gone are the tedious and error-prone installation sittings where one has to decide on a partition scheme, format file systems, select and install software, install the bootloader, and finally reboot into a system that still needs configuration before it is usable. And that is just the installation…

Zero maintenance

With all necessary software preinstalled, installing extra software is not required. And since that preinstalled software already works, updating it is not required either.

No more package managers, package dependencies, and the race of staying up to date.

Dreadful operations like reinstall, system restore, or factory reset are effectively accomplished by simply rebooting.

Reduced file system attack surface

The rootfs is inherently resilient to both unintentional and malicious modification. One cannot accidentally delete something from the rootfs, and a virus cannot modify files here either.

On a traditional system all files are live and some are very important. The presence of e.g. /bin/sh, /lib/libc.so.6, and /usr/bin/[ may be taken for granted, but if one is suddenly deleted, the system will suffer.

For example, this command would wreck most systems, but is harmless to Lightwhale:

[ -f /etc/lightwhale-release ] && sudo rm -frv /bin /lib /boot
No dust

A common issue with long-living installations is the accumulation of unused files. Junk like this takes up disk space and pollutes backups, and is fortunately impossible on a read-only file system.

It's just a copy

The entire system image is really just a copy of a static image. It holds no information itself, so if the boot device breaks, toss it and write a new one.

Reduced power usage

Lightwhale has even taken steps towards being environmentally friendly and sustainable as a platform.

First of all, it only runs a bare minimum of system services, which reduces CPU load and thereby power consumption.

Furthermore, once the rootfs has been loaded into memory it stays there. The boot media is never accessed again until the next reboot. This greatly reduces wear and tear of physical boot devices, especially USB flash drives or eMMC, because no writes ever occur. But it also allows devices to spin down and/or enter low-power mode. If Lightwhale is network-booted, it may lower overall power consumption even further because no boot media exists at all.

Free to experiment
Don't hold back, feel free to explore and experiment, you can't break it.

Booting into Immutability

When Lightwhale is booted off the boot device, first the Linux kernel is loaded and started, then the rootfs is loaded and mounted into memory, and everything continues to initialize and eventually run from here. The rootfs is a squashfs, which is compressed to save memory; files are decompressed at runtime when read.

As mentioned, its immutability gives some amazing advantages, but most read-only systems mix in some kind of writable file system in order to function. Otherwise the SSH server cannot save its host keys, network configuration and logs cannot be saved, and one cannot change the default user password, to name just a few problems.

Lightwhale uses a volatile, memory-based tmpfs to support adding new and modifying existing files in selected directories. The tmpfs is mounted by default if persistence is not enabled, and all changes are lost when the system reboots.
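On a running system, the arrangement can be inspected with the standard findmnt tool; a sketch (the exact mount table shown by Lightwhale may differ):

```shell
# Show which mount points are backed by overlay, tmpfs, or squashfs:
findmnt -t overlay,tmpfs,squashfs -o TARGET,FSTYPE,SOURCE
```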

But losing passwords, keys, containers, and volumes during a reboot is unacceptable. This is where Lightwhale persistence comes in.


In order for customized configuration and data to be retained across reboots, a physical persistence device is required. Typically an SSD or HDD, but a USB flash drive is sufficient for testing purposes. Note that the persistence device cannot be the same as the boot device; rather it must be a separate and dedicated device that Lightwhale has at its full disposal.

Lightwhale fully automates device detection, partitioning, formatting, mounting, and making sure only the data and modified configuration are persisted; the only manual step is to actually enable persistence.

Enable Persistence

Simply write the magic header to the desired storage device to be used as persistence device. Writing the magic header is easy, and can be done from Lightwhale itself once the device has been identified, e.g. /dev/sdx:

lsblk
echo "lightwhale-please-format-me" | \
  sudo dd conv=notrunc of=/dev/sdx

Now reboot, and Lightwhale will automatically claim the device for persistence.

Claim Persistence

Early during system initialization, Lightwhale will attempt to claim persistence through a series of steps, implemented in the setup-persistence script:

1. Scan for magic header

Scan all available block devices for a magic header, specifically this sequence of bytes at the very start of the device: lightwhale-please-format-me

If found, it is considered a magic device.
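The same check can be done by hand; a sketch that reads exactly as many bytes as the header is long and compares them (/dev/sdx is a placeholder, and the same works on an image file):

```shell
dev=/dev/sdx   # placeholder; also works on e.g. an image file
magic="lightwhale-please-format-me"

# Read the first bytes of the device and compare them to the header:
if [ "$(sudo head -c "${#magic}" "$dev")" = "$magic" ]; then
    echo "$dev carries the magic header"
fi
```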

2. Create persistence partition

If a magic device was found, it will be partitioned, formatted with ext4, and labeled lightwhale-data.

This becomes the actual persistence partition.

3. Mount persistence partition

Scan for the persistence partition by its label, lightwhale-data. If found, mount it under /mnt/lightwhale-data.

At this point, persistence is achieved only within that specific mount point; files saved here will be available after rebooting Lightwhale. Once the lightwhale-data partition has been created, subsequent bootups will effectively jump directly to this step.

4. Create overlay file system

If a persistence partition (or tmpfs) was mounted, merge it into an overlay file system, and thereby ensure persistence (or volatile read-write) of selected directories.


If a persistence partition was not found in step 3 during claim, then a clever but tricky strategy automatically kicks in:

A tmpfs is mounted on the same /mnt/lightwhale-data mount point, where the persistence partition would otherwise have been mounted. This provides a writable file system while seamlessly mixing into the overlay file system to provide the exact same per-directory writability that real persistence does.

Of course, being a volatile tmpfs, all changes are lost during power-off.

Overlay File System

Lightwhale adds persistence (or tmpfs) on a per-directory basis by mounting an overlay file system on top. The overlay virtually merges layers of existing directories, while keeping their individual content separate.

Effectively, the overlay mirrors the original immutable file system, while modifications, new files, and even deletions made to it override the original content and are kept only on the persistence file system (or tmpfs).
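The mechanics can be sketched with a plain overlayfs mount. This is illustrative only, not Lightwhale's actual setup-persistence script; directories are examples, and a user namespace is used so it runs without root (Linux 5.11 or later):

```shell
# lower = read-only system layer, upper = writable persistence layer.
mkdir -p /tmp/ovl/lower /tmp/ovl/upper /tmp/ovl/work /tmp/ovl/merged
echo "from the system image" > /tmp/ovl/lower/motd

# Mount the overlay inside an unprivileged user + mount namespace:
unshare --map-root-user --mount sh -c '
    mount -t overlay overlay \
        -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work \
        /tmp/ovl/merged
    cat /tmp/ovl/merged/motd                 # lower-layer file is visible
    echo "customized" > /tmp/ovl/merged/motd # write is diverted to upper
'

cat /tmp/ovl/upper/motd   # the modification landed in the upper layer only
cat /tmp/ovl/lower/motd   # the original lower layer is untouched
```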

Per-Directory Persistence

Persistence (or tmpfs) does not cover the entire file system, but is limited to the following essential directories only:

All custom system configuration. When Lightwhale boots with persistence, the persisted /etc is overlaid early in the boot process so that customized configuration takes effect.
All user home directories.
Standard web root. Not strictly necessary, but convenient for running an ad-hoc web server.

All Docker runtime files including downloaded images, containers and their state, configured networks, and managed volumes.

Actually, there is a technicality that prevents this directory from being overlaid, because Docker itself requires it for an internal overlay. So instead, Lightwhale automatically reconfigures Docker to use /mnt/lightwhale-data/docker for its data root.
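For reference, in stock Docker the same relocation is expressed with a data-root entry in /etc/docker/daemon.json; shown here only as an illustration of what Lightwhale automates:

```json
{
  "data-root": "/mnt/lightwhale-data/docker"
}
```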

Directories like /bin, /usr, and /lib are deliberately not persisted. Modifying them would be considered "abuse" of the system, since installing custom programs directly on the root file system defeats the very purpose of using Lightwhale. One should add software through Docker containers, which can also include system tools like an updated git.

Absolute Data Segregation

A more classic approach would be to mount, for example, /etc on a different disk to separate config from system. However, remember that this requires a fully populated directory structure to be available on that mount point, including files that were never changed from the original system.

The strict separation of the system on the boot device and customized configuration and data on the persistence device, virtually merged by an overlay, gives Lightwhale its amazing perk of absolute data segregation.
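One consequence is that a complete backup reduces to copying the persistence partition. A sketch, assuming the lightwhale-data partition label described above (device paths and file names are examples):

```shell
# Stream the entire persistence partition into a compressed backup file:
sudo dd if=/dev/disk/by-label/lightwhale-data status=progress | \
    gzip > lightwhale-data-backup.gz

# Restoring is simply the reverse direction:
gzip -dc lightwhale-data-backup.gz | \
    sudo dd of=/dev/disk/by-label/lightwhale-data conv=fsync
```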

Testing Persistence in QEMU

This is easy, interesting, heck, you might even find it fun!

1. Create persistence device file

Create an empty file for the virtual persistence device:

dd if=/dev/zero of=persistence.img bs=1M count=700
2. Write magic header

Write magic header to persistence device file so it will automatically be claimed for persistence during next boot:

echo "lightwhale-please-format-me" | \
  dd conv=notrunc of=persistence.img
3. Start QEMU

Start QEMU with the virtual persistence device file attached:

qemu-system-x86_64 \
  -enable-kvm \
  -m 2G \
  -drive file=persistence.img,index=0,format=raw,media=disk \
  -drive file=lightwhale-1.3.0-rc26.bios.iso,index=2,format=raw,media=cdrom \
  -boot d \
  -device virtio-rng-pci \
  -net nic,model=virtio \
  -net user,hostfwd=tcp::10022-:22,hostfwd=tcp::10080-:80

During startup, notice how Lightwhale claims the device and creates the lightwhale-data persistence partition, and overlays the essential directories.

4. Copy SSH key from host machine to Lightwhale

Copy SSH user key from host to Lightwhale in QEMU and enter password, after removing any old offending SSH key:

ssh-keygen -R "[localhost]:10022"
ssh-copy-id -p 10022 op@localhost
5. Log into Lightwhale using host SSH key

This should not (and never again) prompt for password:

ssh -p 10022 op@localhost
6. Start a Web Server

Start a web server and test it locally:

echo "Hello, World! Lightwhale has persistence enabled!" | \
  sudo tee /var/www/index.html
d run -d --restart unless-stopped -p 80:80 -v /var/www:/public iorivur/darkhttpd
curl http://localhost
7. Reboot
sudo reboot

Now, normally this would completely reset Lightwhale, and the SSH server host keys, the authorized SSH user key, and the web server would be erased from memory. But we have persistence enabled…

8. Test Web Server

Query its web server from the host:

curl http://localhost:10080/

This returns the index.html, which confirms that both data and container were persisted.

9. Test SSH

Connect with SSH from the host:

ssh -p 10022 op@localhost

This logs in without a password prompt, which confirms that the SSH server host keys and the user SSH key were persisted.