Building a home network lab is a fantastic way to gain hands-on experience with various technologies, test configurations, and explore new concepts. However, setting up a physical lab with multiple machines can be cumbersome, expensive, and space-consuming. This is where Docker, a popular containerization platform, comes to the rescue. By utilizing Docker in your home network lab, you can streamline the setup, optimize resource utilization, and experiment with different network scenarios seamlessly.

There are several benefits and practical applications of Docker in a home network lab. As soon as I discovered Docker, I understood how it has revolutionized the way we build and manage services, and I wanted to include it in my home network. I personally use it for many services inside a small home network: a central ad-blocker, an uptime monitor, Home Assistant, an MQTT broker, a Time Machine backup, and an NVR.

Mini PC

Given the recent increase in energy costs, it may not be sustainable to keep a server running 24/7 for hobby purposes alone. However, post-pandemic events have also flooded the second-hand market with mini PCs: during the first Covid-19 years everyone started to work remotely, and now you can find plenty of mini PCs at very reasonable prices. A mini PC seemed a very good base on which to build and start testing several Docker containers, and so I did.

The small Dell Wyse is under surgery
It plays a fundamental role in the home network lab renovation. You can find more in the corresponding article: Network home lab

Debian 11

I started with a small Dell Wyse that I bought on the second-hand market for a very low price. First of all, I replaced the main drive with an SSD to speed everything up. To host all the services I chose Debian as the OS, where I can install the Docker Engine and much more.

Without going into the details, you can enable SSH root login through the SSH server configuration. Open /etc/ssh/sshd_config and change the following line:

PermitRootLogin without-password

to:

PermitRootLogin yes

Then you can restart the SSH service and log in remotely to enter all the following commands.
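On Debian the restart is a one-liner (assuming systemd; depending on the release the unit may be called ssh or sshd):

```shell
# Apply the new sshd_config
sudo systemctl restart ssh
```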


Docker is a set of platform-as-a-service products that use OS-level virtualization to separate environments into containers. The core of Docker is the Docker Engine, which hosts all the containers. The software was first released in 2013 and is now a standard tool to automate the deployment of applications in lightweight containers, so that applications can run efficiently in different environments.

I installed the Docker Engine together with the Compose plugin with the following input:

sudo apt-get install docker-ce docker-ce-cli docker-buildx-plugin docker-compose-plugin
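Note that these packages come from Docker's own apt repository, which has to be configured first. A typical setup on Debian, following Docker's official instructions, looks like this:

```shell
# Install prerequisites and Docker's signing key
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository matching the current Debian release
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
```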

Moreover, I wanted a graphical interface to interact with the various containers, and I chose Portainer. Installing it, as for all containerized software, requires a very simple procedure. Here Docker’s magic happens: you only have to input a single command:

docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

Now you can go to http://debianHost:9000 to log in and configure your graphical interface.

Portainer – GUI for Docker containers


The second container that I was really interested in trying was Pi-hole, a DNS server that could act as an ad blocker for your entire network.

DNS servers, or Domain Name System servers, play a crucial role in the functioning of the internet. They act as a directory, or a phonebook of sorts, translating human-readable domain names (such as example.com) into their corresponding IP addresses (such as 192.0.2.1). When you enter a website’s domain name in your web browser, your device contacts a DNS server to obtain the IP address associated with that domain.

DNS servers are distributed across the internet and work together in a hierarchical system. When a DNS server receives a request for a domain name, it first checks its own cache to see if it has the corresponding IP address stored. If not, it contacts other DNS servers, moving up the hierarchy, until it finds the IP address and returns it to the requesting device.
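You can watch this resolution happen from any Linux box with the dig utility (on Debian it is in the bind9-dnsutils package; example.com is just a sample domain):

```shell
# Ask your configured resolver only for the final answer
dig +short example.com

# Follow the full delegation chain: root servers -> TLD servers -> the domain's own servers
dig +trace example.com
```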

DNS resolution

Pi-hole is a popular ad-blocking solution that operates at the network level. It functions as a DNS sinkhole by acting as a DNS server for your local network. When a device on the network requests any content that contains ads, Pi-hole intercepts the DNS query and checks if the requested domain is on its blacklist of known ad-serving domains.

By blocking ads at the network level, Pi-hole offers comprehensive ad-blocking coverage for all devices connected to the network, regardless of the browser or application being used. It helps reduce bandwidth usage, speeds up webpage loading times, and provides a cleaner, ad-free browsing experience across multiple devices simultaneously.

LAN integration

I know that Pi-hole was born to be installed on a Raspberry Pi; however, a Docker version is available, and I’ll use it. After creating the Pi-hole root directory, you need to create the docker-compose.yaml file as follows:

version: "3"

services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "80:80/tcp"
    environment:
      TZ: 'America/Chicago'
      # WEBPASSWORD: 'set a secure password here or it will be random'
    volumes:
      - './etc-pihole:/etc/pihole'
      - './etc-dnsmasq.d:/etc/dnsmasq.d'
    cap_add:
      - NET_ADMIN
    restart: unless-stopped

Be careful to configure the web admin port, a secure WEBPASSWORD, and the time zone; for me: TZ: 'Europe/Rome'. After saving it, Docker’s magic happens again:

sudo docker-compose up -d

I also use pfSense as the main router, so I need to point its DNS forwarding to the Pi-hole IP. If you are interested, you can find more information in the Pi-hole documentation. I can log in at http://debianHost:8080/admin/login.php and Pi-hole works.

Test with Pi-Hole not working
Test with Pi-Hole working

Pi-hole leverages publicly available blocklists of known ad-serving domains, which are continuously updated to keep up with new ad sources. Additionally, it provides options for users to manually add domains to the blacklist, or to whitelist certain domains if desired. For instance, I enabled several of the publicly available lists.

Pi-Hole web page


Time Machine is a backup software application introduced by Apple Inc. for its macOS operating system. It is designed to help users easily back up and restore their data, primarily focusing on protecting files, documents, and system settings.

To use Time Machine, you need an external storage device such as an external hard drive or a network-attached storage (NAS) device. This drive will serve as the backup destination where your files and data will be stored. Once you connect the backup drive to your Mac, you can set it up as the Time Machine backup destination. macOS will prompt you to configure Time Machine, or you can access it through System Preferences. You can select the backup drive and choose which files and folders you want to include in the backup.

Backup history

Time Machine performs automatic and continuous backups in the background and keeps multiple versions of your files and documents. It creates snapshots of your system at different points in time, allowing you to go back in time and retrieve earlier versions of files or restore your system to a specific state.

Apple Time Machine simplifies the process of data backup and restoration for macOS users. By automatically and continuously backing up your files, it provides a convenient way to protect your data and recover lost or modified files easily.

To do that, I have to create a network share that can be seen by the Time Machine software. I added an external 5 TB hard drive to the mini computer and mounted it inside the Debian server:

$ sudo mount <device> <dir>
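For instance, assuming the drive shows up as /dev/sdb1 and is formatted as ext4 (both are assumptions; check your system with lsblk and blkid), the concrete commands could look like:

```shell
# Identify the new disk and its UUID (device names below are examples)
lsblk
sudo blkid /dev/sdb1

# Create a mount point and mount the partition
sudo mkdir -p /mnt/timemachine
sudo mount /dev/sdb1 /mnt/timemachine
```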

Remember also to mount it permanently by editing /etc/fstab, which stores the static information about filesystems:

$ cat /etc/fstab
# /etc/fstab: static file system information. 
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
# <file system>              <mount point>         <type>  <options>          <dump>  <pass>
UUID=b9df59e6-c806        /                     ext4    errors=remount-ro  0       1
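The file above only shows the root filesystem; a line for the external drive has to be appended. Assuming the hypothetical mount point from before (the UUID is a placeholder; get yours with blkid), it could look like:

```
# External 5TB drive for Time Machine backups
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/timemachine  ext4  defaults,nofail  0  2
```

The nofail option keeps the server booting even if the USB drive is disconnected.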

Then, I installed the timemachine Docker image by odarriba. Of course, I had to link the Time Machine images path to the previously mounted drive.
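The run command is along these lines (the host path is the mount point assumed earlier, and the container-side data path may differ between image versions, so check the image's README; AFP itself uses TCP port 548):

```shell
docker run -d --name timemachine --restart=always \
  -p 548:548 -p 636:636 \
  -v /mnt/timemachine:/timemachine \
  odarriba/timemachine
```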

$ docker exec timemachine add-account USERNAME PASSWORD VOL_NAME VOL_ROOT [VOL_SIZE_MB]

An extra step is required to enable the autodiscovery functionalities.

Local server discovery

To enable the autodiscovery the Avahi service is required. It is a software package that provides a system-wide service discovery mechanism on Linux and other Unix-like operating systems. It implements the Zeroconf (Zero Configuration Networking) protocols, also known as Bonjour or mDNS (multicast DNS), which enable automatic network configuration and service discovery in local networks without the need for manual configuration or central servers.

Then, to enable it, small changes are required to the /etc/avahi/services/afpd.service file. A typical service definition advertising the AFP share looks like this (the device-info record only controls which icon macOS shows for the server):

<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_afpovertcp._tcp</type>
    <port>548</port>
  </service>
  <service>
    <type>_device-info._tcp</type>
    <port>0</port>
    <txt-record>model=TimeCapsule</txt-record>
  </service>
</service-group>
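After editing the service file, reload Avahi so the share is announced on the LAN (assuming systemd):

```shell
sudo systemctl restart avahi-daemon
```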


Another interesting service that I use is Uptime Kuma. It is a self-hosted monitoring tool that allows you to monitor network devices, as well as any server, through a web interface. I use it to monitor my network connection, my website, and the status of all my LAN devices.

Uptime Kuma checking that the HTTP response of a temperature sensor is different from “nan”, i.e. the sensor’s GET response should be a number, meaning that both the web server and the temperature sensor work well.

The installation is very simple thanks to the Docker magic:

docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data --name uptime-kuma louislam/uptime-kuma:1

Home Assistant

And we can finally talk about the real reason why I installed this new micro-server with the Debian OS and the Docker Engine: I would like to transfer all my smart-home services to Home Assistant. There will be a dedicated article on it!

Home Assistant is an open-source home automation platform that allows you to control and automate various smart devices and services within your home. It acts as a central hub for managing and integrating a wide range of smart home technologies.
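As a teaser, the officially documented way to run Home Assistant in a container is roughly the following (the config path and timezone are placeholders to adapt; host networking helps device discovery):

```shell
docker run -d --name homeassistant --restart=unless-stopped \
  --privileged \
  -e TZ=Europe/Rome \
  -v /home/user/homeassistant:/config \
  --network=host \
  ghcr.io/home-assistant/home-assistant:stable
```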

Mosquitto broker

Clearly, the smart-home devices also require an MQTT broker. Mosquitto is an open-source message broker that implements the MQTT (Message Queuing Telemetry Transport) protocol. It is commonly referred to as the “Mosquitto broker” due to its primary function as a broker for MQTT-based communication.

Mosquitto is written in C and is available for multiple platforms, including Linux, Windows, macOS, and others. It is widely used in IoT and messaging applications, providing a lightweight and efficient solution for MQTT-based communication. Its installation follows the same Docker-container procedure, but I describe it in the Home Assistant article.
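As a minimal sketch (the host paths are assumptions; the official eclipse-mosquitto image expects its configuration, data, and logs under /mosquitto):

```shell
docker run -d --name mosquitto --restart=unless-stopped \
  -p 1883:1883 \
  -v /home/user/mosquitto/config:/mosquitto/config \
  -v /home/user/mosquitto/data:/mosquitto/data \
  -v /home/user/mosquitto/log:/mosquitto/log \
  eclipse-mosquitto
```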


Another service that I’m interested in is Frigate. It is an open-source software solution designed for video surveillance using machine learning and computer vision techniques. It focuses on analyzing video streams from security cameras to detect and alert users about specific events or objects of interest. Frigate aims to provide a flexible and customizable video surveillance system that can be self-hosted and integrated with various camera models.

Source: Frigate docs.

I’m not sure whether this mini server will be able to meet the requirements; however, I’m very interested in Frigate’s features, like object detection, event detection with notifications, rules and zones, face detection, and great extensibility.

Web server

In the future, I’ll also install a Flask web server. I discovered it in CS50 and I would like to test it a little more. It is a popular and lightweight web framework for building web applications in the Python programming language. It is known for its simplicity, flexibility, and ease of use, making it a preferred choice for developing small to medium-sized web applications and APIs.

And of course, on Debian we have Apache. The Apache HTTP Server, commonly known as Apache, is one of the most popular web servers worldwide. It provides a robust, secure, and efficient platform for serving web content. Apache supports various operating systems and is highly extensible through modules, enabling customization and integration with different technologies.

For now, I use Apache to serve a little web page at the server address that easily redirects me to all my LAN devices and services.