Rclone has become one of the most comprehensive tools for cloud storage: a single command-line tool that talks to dozens of different services, copies data between them, mounts cloud services as if they were local drives, and does it all with encryption and advanced automation. If you manage servers, use multiple cloud services, or simply want robust backups, rclone is the kind of utility that will change the way you work.
In this guide you'll learn how to install rclone on the major systems, configure remotes for services such as Google Drive, OneDrive, S3, or B2, understand how it differs from rsync, mount clouds with FUSE, encrypt data, automate backups with cron or systemd, and solve typical performance, authentication, or API-limit problems.
What is rclone and how does it differ from rsync?
Rclone is an open-source command-line program designed to manage files in cloud storage. It supports over 70 providers: Google Drive, Google Photos, OneDrive (personal, business, and SharePoint), Dropbox, Box, MEGA, pCloud, Proton Drive, S3-compatible services (AWS, Wasabi, Cloudflare R2, Backblaze B2…), Google Cloud Storage, Azure Blob, WebDAV (Nextcloud, ownCloud), SFTP/FTP, SMB/CIFS, HTTP, and many more.
Conceptually, rclone extends the idea of rsync to the cloud world. It synchronizes directories, copies data, and performs unidirectional or bidirectional mirroring, but it also understands cloud APIs, retries, bandwidth limits, caches, and backend-specific metadata. While rsync focuses on local paths or SSH, rclone speaks each provider's API natively.
The key practical difference is focus. Rsync works well in local or SSH environments; rclone is optimized for clouds. It knows when to take advantage of server-side copy (copying directly between buckets without going through your machine), how to split very large files into chunks, and what to do with metadata such as Content-Type, permissions, or versions.
With options like --multi-thread-streams or parallel transfers, rclone can outperform rsync several times over when copying across a network, especially with backends that support chunked uploads (S3, GCS, B2, etc.). It also offers transparent encryption, FUSE mounting, union layers that combine multiple remotes, and a small built-in HTTP/WebDAV/FTP server.

Compatible services and internal architecture of rclone
Broad provider support is one of rclone's strengths. In practical terms, you can define as many "remotes" as you want: each remote describes a connection (for example, gdrive: for personal Google Drive, onedrive: for OneDrive for Business, s3-backup: for an S3 bucket, nextcloud: via WebDAV, etc.).
For end users, rclone easily covers the most common cloud services: Google Drive/Photos, OneDrive (including SharePoint), Dropbox, Box, MEGA, pCloud, Proton Drive, and other privacy-focused services. This lets you centralize into a single command tasks that previously required multiple apps or official clients.
In enterprise and development environments, rclone covers the entire S3 world and similar systems: Amazon S3, Google Cloud Storage, Azure Blob Storage, Backblaze B2, Wasabi, Cloudflare R2, and a good number of compatible providers (MinIO, Ceph, etc.). All are managed with the same basic syntax; only the remote changes.
As for self-hosted protocols and systems, rclone supports SFTP, FTP, WebDAV, SMB/CIFS, and even HTTP. This means you can use it to copy from an SFTP server to an S3 bucket, transfer data from Nextcloud to a local folder, or bulk-download from a web server without needing any additional tools.
Internally, rclone is organized into several layers: a core that orchestrates operations (Rclone Core), a VFS layer used in cached mounts, a Crypt layer that encrypts/decrypts on the fly, and a Chunker layer that chunks large files in backends that require it. Underlying all of this is a common backend abstraction that hides the specifics of each vendor.
System requirements and installation on Windows, Linux, and macOS
Rclone is very lightweight, but it's worth knowing the basic requirements. It runs with 512 MB of RAM, although for intensive use (mounts with caching, many simultaneous transfers) 2 GB or more is advisable. At the CPU level, 1 vCPU is sufficient, but a couple of cores help exploit parallel transfers. On disk, 100 MB of free space is enough for the binary, but if you're going to use the VFS cache, reserving at least 1 GB is recommended.
On Linux, a modern kernel is recommended (ideally 5.4+ with FUSE3), especially if you're going to mount remotes as file systems. As for distributions, rclone works on virtually any current flavor (Ubuntu, Debian, Fedora, etc.) as long as you have curl or wget and sudo privileges.
Detailed installation on Windows
On Windows you have three main ways to install rclone, from the most manual to the most automatic. The essential thing is to end up with rclone.exe accessible from any console (CMD or PowerShell).
A) Manual download from the official website (recommended if you want to be clear about what you're installing):
- Download the ZIP for your architecture, for example rclone-v1.xx.x-windows-amd64.zip for 64-bit.
- Unzip it into a fixed folder, for example C:\rclone, which will contain rclone.exe and several text files.
- Add C:\rclone to the system PATH (Control Panel → System → Advanced settings → Environment Variables → Edit PATH → New → C:\rclone).
B) Installation with Winget on Windows 10/11, perfect if you already use the Microsoft package manager:
- Install rclone: winget install Rclone.Rclone
- Uninstall if necessary: winget uninstall Rclone.Rclone --force
C) Chocolatey, for those who already automate their system software:
- Install rclone: choco install rclone
- If you want to mount drives, also install WinFsp: choco install winfsp
Installation on Ubuntu/Debian and other Linux systems
On Linux, the simplest and always up-to-date method is the official script, which downloads and installs the latest stable (or beta) version with a single command:
- Stable version: sudo -v ; curl https://rclone.org/install.sh | sudo bash
- Beta version: sudo -v ; curl https://rclone.org/install.sh | sudo bash -s beta
If you prefer absolute control, you can download a specific .deb package and manage it with dpkg:
- Download: wget https://downloads.rclone.org/v1.xx.x/rclone-v1.xx.x-linux-amd64.deb
- Install: sudo dpkg -i rclone-v1.xx.x-linux-amd64.deb
- If dependencies are missing: sudo apt -f install
For FUSE mounts it's important to install fuse3 and allow allow_other in /etc/fuse.conf by uncommenting the corresponding line. After that, a restart or service reload is usually sufficient.
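The relevant line in /etc/fuse.conf looks like this; remove the leading # so that non-root users can pass --allow-other when mounting:

```
# /etc/fuse.conf
user_allow_other
```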
Installation on macOS
On macOS, Homebrew makes life much easier: with a couple of commands you have rclone ready to use:
- Install with brew: brew install rclone
- Update regularly with: brew upgrade rclone
If you don't want to rely on Homebrew, you can opt for a manual installation: download the macOS ZIP, extract it, and move the binary to /usr/local/bin, just as you would on Linux. Usage afterwards is identical: run rclone version to check that everything is okay.

First steps: basic setup and the concept of “remote”
The heart of rclone is the configuration file, where you define your remotes. By default it resides in ~/.config/rclone/rclone.conf (Linux/macOS) or under %APPDATA% on Windows, and it is managed with the interactive wizard rclone config.
Each remote is an INI section that groups connection parameters: backend type, credentials, region, special options, etc. A typical example for Google Drive would be something like [gdrive] with its type = drive, scope = drive and the OAuth token stored in JSON.
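As a sketch, such a Google Drive section in rclone.conf looks roughly like this (the token value is a placeholder; rclone writes the real JSON token there after the OAuth flow):

```ini
[gdrive]
type = drive
scope = drive
# rclone fills this in automatically after you authorize in the browser
token = {"access_token":"<placeholder>","token_type":"Bearer","expiry":"<placeholder>"}
```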
To start the wizard, open a terminal and run rclone config. You'll see a menu with several options: create a new remote, edit an existing one, delete it, rename it, encrypt the configuration, etc. The usual first step is to press n for "New remote".
In the case of Google Drive, the typical remote-creation flow includes several steps: choose the "drive" type, decide whether to use your own client ID (recommended to avoid Google's strict shared limits), choose the scope (full drive, read-only, etc.), use automatic authentication in a browser and, optionally, indicate whether it is a shared drive.
If you are on a server without a browser (SSH, VPS, container without a GUI), rclone allows authorization from another machine. When the wizard asks "Use auto config?", answer no, and rclone will display a command, rclone authorize "drive", which you must run on a PC with a browser; then copy the resulting token and paste it into the server at the corresponding prompt.
Basic syntax, remote routes, and fundamental commands
The general syntax of rclone is very consistent and easy to memorize: rclone [options] subcommand source [destination]. The subcommand can be copy, sync, ls, mount, move, etc. The source and destination are local or remote paths of the form remote:path/to/dir.
A path like /path/to/dir points to the local file system, while remote:path/to/dir refers to a directory within a remote defined in the configuration. On most backends, remote:/path/to/dir is equivalent, except in a few peculiar cases (FTP, SFTP, Dropbox Business) where the leading / changes the meaning (root directory vs. home).
The most useful listing commands to get started are ls, lsl, lsd, and tree. For example, to view the files in a Google Drive folder with their sizes: rclone ls gdrive:Documentos. To list only directories: rclone lsd gdrive:. If you want a more visual tree: rclone tree gdrive:Proyectos --level 3.
Copying files is as simple as rclone copy source destination. Practical examples:
- Local → cloud: rclone copy C:\Users\usuario\Documents onedrive:backup/documents -P
- Cloud → local: rclone copy onedrive:photos C:\Users\usuario\Pictures -P
- Cloud → cloud (server-side when possible): rclone copy gdrive:data onedrive:backup -P
The subcommand sync makes the destination identical to the source: it deletes from the destination anything that no longer exists at the source. It's quite dangerous if used carelessly, so at first always accompany it with --dry-run and, if you want, --interactive to request confirmation before destructive operations.
For bidirectional synchronization there is rclone bisync, which is still experimental. It tracks changes on both sides to keep them aligned, which is useful in certain offline-work scenarios, but it's best to test it thoroughly with non-critical data before entrusting it with your digital life.
Mount cloud drives as local drives with FUSE and VFS cache
One of rclone's star features is the ability to mount a remote as if it were a hard drive. This allows you to browse the cloud from the file explorer, edit documents directly, or point applications (media indexers, editors, etc.) at remote paths without them knowing there's a cloud behind them.
In Windows, mounting is done by assigning a drive letter or creating a network drive. For example:
- Mount OneDrive as drive X: rclone mount onedrive: X: --vfs-cache-mode full
- Mount as a network drive: rclone mount onedrive: X: --network-mode --vfs-cache-mode full
On Linux and macOS, a mount point in the file system is used, usually with FUSE:
- Create the directory: mkdir -p ~/OneDrive
- Mount in the background (daemon): rclone mount onedrive: ~/OneDrive --vfs-cache-mode full --daemon
The key parameter here is --vfs-cache-mode, which controls the behavior of the cache:
- off: no cache; maximum read performance, but some apps don't play well with this.
- minimal: the minimum cache required for basic write operations to work.
- writes: caches writes and uploads them later, useful if you edit files but don't need aggressive read caching.
- full: full read and write caching, recommended for mounts that will be used as if they were real disks (multimedia, IDEs, etc.).
For streaming services or media catalogs (Plex, Jellyfin, etc.), --vfs-cache-mode full is usually combined with a good cache size (--vfs-cache-max-size, --buffer-size) and generous retention times (--vfs-cache-max-age, --dir-cache-time), so the server doesn't have to constantly regenerate listings.
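As a sketch, such a mount can be managed as a systemd unit so it comes up at boot. The remote name media:, the mount point, the cache sizes, and the binary paths below are all assumptions to adapt to your system:

```ini
# /etc/systemd/system/rclone-media.service — illustrative unit, not a drop-in
[Unit]
Description=Mount media remote with rclone
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount media: /mnt/media \
  --vfs-cache-mode full \
  --vfs-cache-max-size 10G \
  --vfs-cache-max-age 72h \
  --dir-cache-time 24h \
  --allow-other
ExecStop=/usr/bin/fusermount3 -u /mnt/media
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Note that --allow-other requires the user_allow_other line in /etc/fuse.conf mentioned in the installation section.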
Graphical interface: Rclone Web GUI, Rclone UI and RcloneBrowser
Although rclone was born as a purely CLI tool, today there are several ways to use it with a graphical interface. This is perfect if you're going to delegate tasks to someone who isn't comfortable with the terminal, or you simply want a more visual view of transfers.
The tool itself includes an experimental Web GUI, which is launched with:
- rclone rcd --rc-web-gui --rc-user=admin --rc-pass=password
- Then point the browser to http://localhost:5572 and log in with that username and password.
Furthermore, there are very polished third-party GUIs. One of the most comprehensive options is Rclone UI, a desktop app for Windows, macOS, and Linux that supports drag and drop, task scheduling, multiple concurrent transfers, and visual progress bars. Another long-standing alternative is RcloneBrowser, available even as an AppImage on Linux, which is enough for many users who only need to manage occasional backups.
If you live in the console but don't want so much hassle on your phone, there are also several Android apps that integrate rclone, many of them directly reusing the rclone.conf file you generate on the PC. Simply copy that configuration file to the path the app indicates and you'll have your remotes ready on your mobile device as well.
Transparent encryption with crypt and configuration security
One of rclone's great attractions is being able to encrypt your data before it leaves your machine. The crypt backend acts as a layer on top of another remote: you see normal file names, but encrypted names and content are what get stored in the cloud.
The typical configuration of a crypt remote involves creating a new remote of type crypt and pointing it at a path on another remote, for example remote = gdrive:encrypted. Additionally, you choose the filename-encryption mode (standard, obfuscate, or off) and define a password (and optionally a second password used as a salt to strengthen the encryption).
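The resulting section in rclone.conf looks roughly like this (the remote name is illustrative, and the password values are shown redacted; rclone stores them obscured, never in plain text):

```ini
[gdrive-crypt]
type = crypt
remote = gdrive:encrypted
filename_encryption = standard
directory_name_encryption = true
# values below are written obscured by the rclone config wizard
password = <redacted>
password2 = <redacted>
```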
Once created, operating with the crypt remote is completely transparent. If you run rclone copy /datos/sensibles gdrive-crypt:, in Google Drive you'll only see strange names and unreadable content; through the crypt remote, however, your paths and files appear exactly as they are.
The rclone configuration file can and should be protected when it contains sensitive credentials. rclone itself lets you encrypt that file: in the rclone config menu you choose the option to set a configuration password, enter a key, and from then on the program will ask for that password in order to read rclone.conf.
In automated environments, you can supply the password via the environment variable RCLONE_CONFIG_PASS or with --password-command, so that scripts, cron, or systemd services can use rclone without manual intervention and without leaving the password visible in plain text.
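Two hedged sketches of supplying that password non-interactively. The password value is a placeholder, and secret-tool is just one possible secret store on Linux desktops (it assumes a configured keyring, hence commented out):

```shell
# Option 1: environment variable (value here is only a placeholder)
export RCLONE_CONFIG_PASS='your-config-password'

# Option 2: have rclone fetch the password at runtime from a secret store
# rclone sync /datos remote:backups --password-command "secret-tool lookup rclone config"
```

Prefer --password-command in long-lived services, since environment variables can leak through process listings and unit files.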
Automating backups and scheduled tasks
Where rclone truly shines is in recurring backups and scheduled synchronizations. You can use the native schedulers of each system (Task Scheduler on Windows, cron on Linux, systemd timers) as well as custom scripts that add notifications and cleanup of old versions.
On Windows, Task Scheduler lets you launch rclone at specific times with specific parameters, for example to sync a critical folder with OneDrive every night. You can dump the output to a log file and enable retries if the task fails.
On Linux, the most common practice is to prepare a small backup script and hook it into cron: for example, a rclone sync /datos/ gdrive-crypt:backups/ daily at 2:00, with --log-file, --fast-list, and filters to exclude temporary files or large logs.
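A minimal sketch of such a script, assuming the gdrive-crypt: remote and /datos source from the examples above; the log path and exclusion are illustrative. It checks that rclone exists before doing anything, which is good practice for cron jobs:

```shell
#!/usr/bin/env bash
# Nightly rclone backup sketch. --backup-dir moves files that sync would
# delete or overwrite into a dated history folder instead of losing them.
STAMP=$(date +%F)                          # e.g. 2025-01-31
SRC="/datos"
DEST="gdrive-crypt:backups/current"
HIST="gdrive-crypt:backups/history/$STAMP"
LOG="/tmp/rclone-backup-$STAMP.log"        # assumed writable log location

if command -v rclone >/dev/null 2>&1; then
  rclone sync "$SRC" "$DEST" \
    --backup-dir "$HIST" \
    --log-file "$LOG" --log-level INFO \
    --fast-list --exclude "*.tmp"
else
  echo "rclone not found; nothing to do" >&2
fi
```

Hooked into cron with a line like 0 2 * * * /usr/local/bin/rclone-backup.sh, it runs every night at 2:00.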
If you want to go a step further, you can combine rclone with systemd to mount remotes at boot or run backup scripts as services and timers. This provides much greater visibility (logs integrated into the journal, network-dependency control, automatic restarts on failure, etc.) and is usually preferable to cron on modern systems.
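As a sketch, a service/timer pair for the nightly backup could look like this (the unit names and the script path are assumptions):

```ini
# /etc/systemd/system/rclone-backup.service
[Unit]
Description=Nightly rclone backup
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/rclone-backup.sh

# /etc/systemd/system/rclone-backup.timer
[Unit]
Description=Run rclone backup every night at 02:00

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now rclone-backup.timer; the logs then land in journalctl -u rclone-backup.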
The great thing is that rclone offers flags designed for serious backups: --backup-dir and --suffix to send older versions to a history folder, --checksum to compare with hashes when the backend supports it, --max-transfer and --bwlimit to avoid saturating bandwidth or exceeding daily quotas, or --track-renames to detect renamed files instead of deleting and re-uploading them.
Performance optimization, advanced filters, and troubleshooting
When you start moving many gigabytes or millions of files, the details make all the difference. Rclone exposes an arsenal of performance options: --transfers to adjust the number of parallel uploads/downloads, --checkers for the comparison checks, --multi-thread-streams and --multi-thread-cutoff for multi-threaded transfers of large files, --buffer-size to define the size of the in-RAM buffers, etc.
For collections with many small files, it's usually a good idea to increase --transfers and --checkers, and add --fast-list on backends that support efficient recursive listings. Note that --fast-list consumes more memory, because rclone holds the entire listing in RAM beforehand, so measure first and don't overuse it on machines with limited memory.
Bandwidth limiting is controlled with --bwlimit, which even supports timetables. Something like --bwlimit "08:00,1M 18:00,off" keeps things gentle during office hours and unleashes full speed at night. Using this option helps prevent a backup from hogging the entire office's internet connection.
Filters are another fundamental pillar: with --include, --exclude, --filter-from, --min-size, --max-age, etc., you can specify exactly what is copied and what is not. A well-designed filter file saves hours of unnecessary transfer (for example, by excluding node_modules, .git, caches, giant logs, etc.).
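A small sketch of such a filter file, written here to a temporary path for illustration (adapt the path and the rules; rclone evaluates filter rules top-down, and the first match wins):

```shell
# Create a reusable filter file (the path is just an example)
cat > /tmp/backup-filters.txt <<'EOF'
# '-' excludes, '+' includes; rules apply top-down, first match wins
- node_modules/**
- .git/**
- .cache/**
- *.log
+ **
EOF

# It would then be passed to rclone like this (shown for reference):
# rclone sync /datos gdrive-crypt:backups --filter-from /tmp/backup-filters.txt
```

The trailing + ** matters: without it, a file matched by none of the rules is still included, but being explicit makes the intent obvious and lets you later flip the default to exclude-everything.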
As for typical problems, you'll mostly see authentication errors or API limits on Google Drive and similar services. In those cases it's advisable to reconnect the remote with rclone config reconnect, consider using your own client ID in the Google console, and reduce parallelism and TPS (--tpslimit) if you're hitting rate limits.
When things really go wrong, rclone's debug mode and header dumps are a huge help: launching the command with -vv --dump headers or even --dump bodies (carefully, because it's very verbose) usually reveals what the backend is returning and why. And if you suspect a bug, capturing the output with -vv and opening an issue in the project's GitHub repository is the fastest way to get help.
With all of the above, rclone becomes a central piece for anyone who relies on cloud storage day to day. Whether you use it for encrypted backups across multiple providers, to mount Google Drive on a media server, to migrate data between S3 buckets, to automate database backups, or simply to keep your Raspberry Pi's hard drive from filling up, once you get the hang of the syntax and the remotes, it becomes the tool you always return to when you think, "This can definitely be done with rclone."
