How to deploy Docker containers on remote servers

  • Using Docker Compose and SSH or GitHub Actions simplifies deploying containers to remote servers and makes updating services easier.
  • Tools like WSL 2, VS Code and Dev Containers allow development in remote Docker environments with a near-local experience.
  • Plesk and Portainer offer web interfaces for managing local and remote Docker hosts, Compose stacks, volumes, and images.
  • With VNC/noVNC and Caddy, it is possible to run graphical applications in remote containers and access them securely from the browser.

Deploying Docker containers on remote servers

Working with Docker containers on remote servers has become the bread and butter of anyone wanting to deploy modern applications without getting bogged down in dependencies, library versions, and the classic "it works on my machine." However, when we move from running a simple docker run locally to setting up a serious deployment on a Linux server, with Docker Compose, GitHub Actions, Plesk, Portainer or even graphical applications accessible via browser, things get a bit more complicated.

If your goal is to deploy Docker containers on a remote server (Ubuntu, Debian, Windows with WSL 2, a cloud server, Plesk, etc.) and do it in a maintainable, automated and secure way, this guide offers a fairly complete journey: from the basic use of Docker Compose remotely, to development environments with VS Code, deployments from Plesk, administration with Portainer, and remote execution of graphical applications using noVNC and Caddy.

Basic concepts: Docker containers and remote deployment

Docker is a container platform that packages an application along with everything it needs (libraries, dependencies, binaries, minimal system configuration) so that it runs the same on any machine with the Docker Engine installed. The key difference compared to a virtual machine is that the container doesn't include a complete operating system; instead, it shares the host kernel, resulting in lighter images and better performance.

To deploy Docker containers on remote servers, you typically have a host (for example, an Ubuntu server in the cloud) with Docker and, optionally, Docker Compose, and you send code or images to it to run. You can do this manually via SSH, automate it with GitHub Actions, or integrate it with panels like Plesk or tools like Portainer.

The main real-world scenarios you will encounter when talking about "remote Docker" are three: local development with containers running on another machine (or in WSL 2), automated deployment of backend/frontend services, and management of production containers (monitoring, logging, restarts, network policies, etc.). The technology is the same; what changes is how it's orchestrated.

In addition to traditional backend services, Docker allows for something less well-known but very powerful: running graphical applications (email clients, IDEs, analytics tools, etc.) inside remote containers and accessing them from a browser using VNC over WebSocket. It's a convenient way to leverage powerful servers when your PC isn't powerful enough.

From docker run to Docker Compose on a remote server

Using Docker Compose on a remote server

A fairly common pattern of manual deployment consists of having a code repository on GitHub, a remote Ubuntu server with Docker installed, and a CI/CD workflow (for example, GitHub Actions) that does the following when you push to a branch like development or main:

  • Connect to the remote server via SSH.
  • Stop and remove running containers.
  • Download the new images from Docker Hub (or from your private registry).
  • Run docker run for each service.
  • Let Nginx (or the reverse proxy you use) redirect traffic to the ports of each container.

When you switch to Docker Compose, the flow is greatly simplified: instead of managing containers one by one, you define your entire stack (frontend, backend, database, cache, etc.) in a single docker-compose.yml, with its networks, volumes and environment variables.
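As a rough idea of what such a stack file can look like, here is a minimal sketch; the service names, image tags and ports are invented placeholders:

```yaml
services:
  backend:
    image: registry.example.com/myapp-backend:1.4.2   # pin a specific tag, not :latest
    env_file: .env        # secrets stay on the server, out of the repo
  frontend:
    image: registry.example.com/myapp-frontend:1.4.2
    ports:
      - "80:3000"
    depends_on:
      - backend
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume survives "compose down"
volumes:
  db-data:
```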

The simplest (and quite common) practice with GitHub Actions is a workflow that performs a cd into the project directory on the remote server (where your docker-compose.yml lives) and executes commands such as the following (a full workflow sketch appears after the list):

  • docker compose pull to bring the latest images.
  • docker compose down to stop and remove old containers (optionally with --remove-orphans).
  • docker compose up -d --build if you build images from the server itself.
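A minimal workflow sketch along those lines, assuming a community SSH action such as appleboy/ssh-action and placeholder secret names, user, path and branch:

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.SSH_HOST }}
          username: deploy                 # dedicated non-root user, as suggested below
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd /srv/myapp                  # directory holding docker-compose.yml
            docker compose pull
            docker compose down --remove-orphans
            docker compose up -d --build
```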

This approach works well and is quite robust, provided you have good control over credentials, environment variables, and volumes. You can improve security by preventing GitHub Actions from having full root access to the server: restrict the SSH keys, use a dedicated user, or even expose Docker as a secure remote service with TLS certificates instead of running everything over SSH.

There's no "magic way" much better than this for a simple environment: the key is to automate, write the docker-compose.yml well, use immutable image tags (e.g., specific versions) and have some rollback mechanism (e.g., keeping the previous version of the compose file or the images).

Remote development environment with Docker, WSL 2 and VS Code

Remote Docker with VS Code and WSL 2

On Windows, it is very common to develop with Docker using WSL 2. Docker Desktop for Windows offers a WSL 2-based engine that allows you to run Linux and Windows containers from the same machine, while editing code with VS Code and testing in the local browser.

The typical workflow for setting up a development environment with remote containers using WSL 2 is:

  1. Install WSL 2 and a Linux distro (Ubuntu, for example).
  2. Install Docker Desktop on Windows and enable the "Use the WSL 2 based engine" option in Settings > General.
  3. In Settings > Resources > WSL Integration, select the WSL distros in which you want Docker to be available.
  4. Check the installation with docker --version and by running docker run hello-world within the WSL distro, as in the snippet below.
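The sanity check from inside the WSL distro looks like this:

```bash
docker --version        # should report the engine provided by Docker Desktop
docker run hello-world  # pulls a tiny test image and prints a greeting
```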

Developing "inside" the containers is not just a matter of using Docker as the engine; VS Code is key here. With the WSL, Dev Containers, and Docker extensions, you can do things like:

  • Open your project folder hosted within WSL directly in VS Code.
  • Reopen that folder "in a development container" (Dev Container), using a Dockerfile or a devcontainer.json that describes your ideal environment (Python version, Node, etc.); a sketch follows this list.
  • Debug your application from VS Code while it's running in the container.
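A minimal devcontainer.json sketch, assuming one of Microsoft's stock Dev Container images (the image tag and extension ID here are examples):

```jsonc
// .devcontainer/devcontainer.json
{
  "name": "python-dev",
  "image": "mcr.microsoft.com/devcontainers/python:3.12",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]   // preinstalled in the container's VS Code
    }
  },
  "forwardPorts": [8000]                   // e.g. the Django dev server
}
```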

A very common example is working with a Django or Node.js project: you clone the repository within WSL, open the folder with code ., select a Dev Container definition (for example, "Python 3"), and VS Code builds the image and starts the container with all the dependencies. From there, you can run, debug, and verify that the code runs on Linux even if your host system is Windows.

This approach is also useful when your machine isn't very powerful, because you can move part of the load to a remote server with Docker and connect to it with VS Code via SSH and Dev Containers, working almost as if it were local but relying on the server's resources.

Deploying applications on a cloud server with Docker and Docker Compose

Setting up a cloud server with Docker ready for deployment is very fast with most providers: you choose a system image that already has Docker pre-installed, wait a few minutes, and your machine is ready to receive containers.

A typical pattern for deploying a simple Node.js application would be this (a Dockerfile and compose sketch follow the list):

  1. Create the Node.js project (for example, a "Hello world" with Express) on your local machine: project folder, an app subdirectory, npm init, installation of dependencies (such as express) and an index.js that sets up a server on port 3030 with a basic message.
  2. Dockerize the app with a Dockerfile that defines the base image (for example node:12) and the WORKDIR, copies the app files, runs npm install and exposes the internal port.
  3. Add a .dockerignore to avoid copying things like node_modules into the image.
  4. Create a docker-compose.yml in the project root, indicating the version (for example, 3.8) and defining the main service, its build, port mapping (3030:3030) and the command (node index).
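Sticking to the description above (node:12 as in the text, although that version is end-of-life and a maintained tag like node:20 would be the sensible pick today), the two files could look roughly like this:

```dockerfile
FROM node:12
WORKDIR /usr/src/app
COPY app/package*.json ./    # copy manifests first so the npm install layer caches
RUN npm install
COPY app/ .
EXPOSE 3030
CMD ["node", "index.js"]
```

```yaml
# docker-compose.yml at the project root
version: "3.8"
services:
  app:
    build: .
    ports:
      - "3030:3030"
    command: node index
```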

Once you have the project and the compose file ready, the deployment to the remote server usually follows this flow (see the commands after the list):

  • You connect to the server via SSH.
  • You clone the repository or upload the files (Git, SCP, rsync…).
  • You install docker-compose if it wasn't already there (in many distros it has to be installed separately from Docker, for example by downloading the binary from GitHub and giving it execution permissions).
  • You execute docker-compose up (or docker compose up, depending on the version) so that the images are downloaded, your app image is built, and the containers are started.
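In shell terms, and with a placeholder path and a compose release picked purely as an example, that flow boils down to something like:

```bash
# On the remote server, once the repository is in place
cd ~/myapp

# Only needed if your distro lacks Compose v2 (the plugin bundled with Docker);
# v2.27.0 is just an example release number
sudo curl -L "https://github.com/docker/compose/releases/download/v2.27.0/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

docker compose up -d --build   # or docker-compose, depending on what you installed
docker compose logs -f         # follow the logs to confirm the app came up
```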

One point that is often overlooked is the provider's firewall. If your service listens on port 3030, you'll need to open it in your firewall rules or create a specific policy and associate it with the server. Otherwise, you'll only see a "connection refused" from the outside, even if the container is running correctly.
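Provider firewalls are managed from their panel or API, but if the server also runs a host firewall such as ufw, the port must be opened there too; for example:

```bash
sudo ufw allow 3030/tcp   # let outside traffic reach the exposed container port
sudo ufw status           # verify the rule is active
```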

Once it's operational, you can access the application using the server's public IP address and exposed port (for example, http://SERVER_IP:3030), or by hiding that port behind a reverse proxy like Nginx/Traefik that listens on ports 80/443.

Managing remote containers with Plesk and Docker

If you use Plesk as your control panel, you can also use its Docker extension to manage containers directly from the web interface, both on the server itself and on remote Docker hosts.

Plesk supports Docker on a good variety of operating systems: CentOS 7, RHEL 7, Debian 10/11/12, various versions of Ubuntu (18.04, 20.04, 22.04, 24.04), AlmaLinux 8/9, Rocky Linux 8.x, and updated Virtuozzo 7. In Plesk for Windows, Docker does not run locally but on a remote machine acting as the Docker host.

There are some important limitations to keep in mind:

  • You cannot use the Docker extension if Plesk is deployed inside a Docker container.
  • To use remote Docker services (i.e., external hosts), you need an additional license or specific packs (Hosting Pack, Power Pack, Developer Pack).
  • Docker containers managed by Plesk are not "migratable" as such, although you can back up the data they use using volumes or snapshots.

From the Plesk interface you can search for images both in the local repository (images already downloaded to the host) and in Docker Hub. To launch a container, the panel guides you through these steps:

  1. Go to Docker > Containers > Run Container.
  2. Find the desired image and review its documentation on Docker Hub (if applicable).
  3. Optionally select a specific image label/version.
  4. Configure container parameters (environment variables, ports, volumes, memory, automatic startup, etc.) and click Run.

Plesk also lets you manage advanced settings for each container: reassign ports (automatic or manual), decide whether a port is accessible from the Internet or only from localhost, limit the RAM the container can consume, define volumes (path on the host and in the container) or add as many environment variables as you need.

Regarding remote orchestration, Plesk can work with "remote Docker services". This involves configuring the Docker daemon on the remote host (for example, via an /etc/docker/daemon.json with TLS and TCP socket support), generating .pem certificates and registering that host in Plesk from Docker > Environments > Add Server. Then you can mark that Docker node as active and switch between different servers from the same interface.
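As a reference for what that daemon configuration can look like, here is a minimal sketch; the certificate paths and port 2376 are conventional examples, and the .pem files come from your own CA:

```json
{
  "tls": true,
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem",
  "hosts": ["tcp://0.0.0.0:2376", "unix:///var/run/docker.sock"]
}
```

On systemd-based distros, note that the hosts key clashes with the -H flag baked into the default docker.service unit, so a unit override may be needed before the daemon will start.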

Deploy Docker Compose stacks from Plesk

If you already use Docker Compose for your infrastructure, you might be interested in having Plesk handle the deployment of "stacks" from docker-compose.yml files.

The workflow for deploying a Compose stack in Plesk is relatively straightforward:

  1. Go to Docker > Stacks > Add Stack.
  2. Assign a project name to the stack.
  3. Choose the source of the Compose file: editor (paste the content), upload a file from your computer, or select an existing file in a domain's web space.
  4. Confirm the configuration and let Plesk create the defined containers.

Everything that is built during the build of the associated Compose stack is stored in the website's main directory, which facilitates access to logs, intermediate artifacts, or additional files generated by the build.

Plesk also facilitates the management of local images: from Docker > Images you can filter, review the different image tags, see the space used, and delete outdated images to free up disk space. This is important in remote environments with limited space.

If you use Nginx as your front-end web server, Plesk applies proxy rules (for example, in the domain's nginx.conf) to route traffic to your Docker containers, even in scenarios behind NAT. This saves you from manually dealing with reverse proxy configurations on remote servers; the generated rules look conceptually like the sketch below.
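A hypothetical hand-written equivalent of such a proxy rule (the upstream port is an example):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;          # container's published port
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```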

Manage remote containers with Portainer

Portainer is a lightweight web interface for Docker that greatly simplifies the daily work of those who don't want to live on the command line. It runs as a container itself and can manage the local host or multiple remote hosts (even with the Portainer Agent).

To install Portainer on your server using Docker, you usually follow these basic steps (collected as a snippet after the list):

  • Create a volume for the Portainer data: docker volume create portainer_data.
  • Launch the Portainer container by mapping ports 8000 and 9000, mounting the Docker socket (/var/run/docker.sock) and the data volume: docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer.
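The same steps as a copy-paste snippet (the portainer/portainer image named in the text still exists, though current releases ship as portainer/portainer-ce and also listen for HTTPS on 9443):

```bash
docker volume create portainer_data

docker run -d -p 8000:8000 -p 9000:9000 \
  --name=portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer
```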

This will have Portainer listening on port 9000 of the server. As always, you'll need to open the port in your firewall or expose it through an HTTPS reverse proxy. The first time you access it through a browser, Portainer will ask you to create an administrator user, and then you can choose whether to manage the local Docker host or additional remote hosts.

The Portainer panel is quite intuitive: you'll see active containers, their logs, resource consumption statistics, Compose stacks, networks, volumes, and more. Most importantly, it lets you recreate containers with different parameters, update images, manage stacks, and centralize multiple remote servers in a single interface.

Run graphical applications in remote containers and access them via a browser

When your own computer runs short of resources but you need heavy graphical applications (such as email clients, IDEs, or reverse engineering tools), a very interesting solution is to run them in Docker containers on a powerful server and access them via the web.

A very well-documented case study encapsulates Mozilla Thunderbird in a container, exposes its graphical interface through TigerVNC/noVNC, and secures access with Caddy. The concept can be reused for almost any Linux GUI application.

The basic architecture of this type of graphical container usually includes:

  • A lightweight VNC/X11 server (TigerVNC) that acts as a display server.
  • A minimalist window manager (OpenBox) for handling windows.
  • A small server, easy-novnc, which exposes VNC over WebSocket and serves an HTML page to connect from the browser.
  • supervisord or similar to start and monitor all processes within the container.
  • The application itself (Thunderbird, GIMP, etc.) configured to be displayed on the remote display (DISPLAY=:0).

In practice, a working directory is set up (for example, ~/thunderbird) containing:

  • A supervisor.conf that defines the programs to launch: TigerVNC, easy-novnc, OpenBox and the main application, with priorities so that the graphical server starts before the app (see the sketch after this list).
  • A menu.xml that configures the OpenBox desktop menu (main application, terminal, process monitor with htop, etc.).
  • A multi-stage Dockerfile that in the first stage compiles easy-novnc with Go, and in the second stage creates the final image on Debian: installing openbox, tigervnc, supervisor, console utilities and the application (Thunderbird in the example), copying the binaries and configurations, creating a non-root user and defining a persistent data volume at /data.
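A minimal supervisor.conf sketch along those lines; binary paths, flags and priorities are illustrative, not the exact file from the case study:

```ini
[supervisord]
nodaemon=true

[program:x11]
priority=100
command=/usr/bin/Xtigervnc -desktop app -localhost -rfbport 5900 -SecurityTypes None :0

[program:easy-novnc]
priority=200
command=/usr/local/bin/easy-novnc --addr :8080 --host localhost --port 5900

[program:openbox]
priority=300
environment=DISPLAY=":0"
command=/usr/bin/openbox

[program:app]
priority=400
environment=DISPLAY=":0"
command=/usr/bin/thunderbird
```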

The container's default command is usually delegated to supervisord, started as a normal user via gosu after adjusting permissions on the data volume. When the container starts, VNC, noVNC, the window manager, and the application are launched automatically, and you only need to access the HTTP port exposed by easy-novnc.

To make it more robust and friendlier to expose to the internet, it's a good idea to put a web server like Caddy in front, also in a container, acting as a reverse proxy to your graphical app, adding basic authentication (username and hashed password) and optionally exposing WebDAV so you can access the files in /data from your local machine.

Orchestrate the solution with networks, volumes, and Caddy

To keep this type of deployment organized, the usual approach is to create a dedicated Docker network and one or more data volumes:

  • A network, for example thunderbird-net, which will be shared by all related containers.
  • A volume, thunderbird-data, which will contain the user profile and persistent data of the graphical app.

The container for the graphical application can then be launched with something like (see the commands after this list):

  • A --restart=always policy so that it comes back up on its own.
  • The volume mount thunderbird-data:/data.
  • A connection to the thunderbird-net network.
  • An identifiable name (thunderbird-app, for example) that Caddy will use for the reverse proxy.
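Example commands for the layout described above; the image name thunderbird-vnc is a placeholder for whatever you tagged your multi-stage build as:

```bash
docker network create thunderbird-net
docker volume create thunderbird-data

docker run -d \
  --name thunderbird-app \
  --restart=always \
  --network thunderbird-net \
  -v thunderbird-data:/data \
  thunderbird-vnc
```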

In another directory (for example, ~/caddy), the Caddy image is built with the necessary modules (such as the WebDAV plugin) and a Caddyfile that defines (a sketch follows the list):

  • A server on port 8080.
  • A reverse_proxy to thunderbird-app:8080 (or whatever port noVNC exposes).
  • Additional paths for browsing files (/files) and for WebDAV (/webdav), both serving the content of the data volume.
  • A basicauth block that protects all paths with a username and a hashed password read from environment variables.
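A Caddyfile sketch matching that description; APP_USERNAME and APP_PASSWORD_HASH are hypothetical variable names, and the hash is the kind produced by caddy hash-password:

```caddyfile
:8080 {
    basicauth {
        {env.APP_USERNAME} {env.APP_PASSWORD_HASH}
    }
    root * /data
    route /files/* {
        uri strip_prefix /files
        file_server browse
    }
    route /webdav/* {
        uri strip_prefix /webdav
        webdav
    }
    reverse_proxy thunderbird-app:8080
}
```

The webdav directive is not in stock Caddy; it comes from the mholt/caddy-webdav module, which is why the image has to be built with the extra plugin (for example with xcaddy).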

When creating the Caddy container, the same thunderbird-data:/data volume is mounted, it is connected to the thunderbird-net network and its port is published on the host (for example, -p 8080:8080). With this, you just need to point your browser at http://SERVER_IP:8080, enter your credentials and click connect to start using the graphical application remotely.

Long-term maintenance is simple: when you need to update, you stop and delete the containers, rebuild the images with the new versions, and relaunch them with docker run, keeping the data volume so that the user's settings and files remain intact.

With all these pieces (Docker Compose, Plesk, Portainer, VS Code, WSL 2, Caddy, noVNC…) it is possible to set up everything from simple backend deployments to remote desktops encapsulated in containers and perfectly accessible via browser, taking advantage of servers with much more power than your own machine while keeping quite fine-grained control over networks, security, storage and updates.