Rclone Setup with Google Service Accounts Using sa-gen and sasync


Since Google doesn't provide a Linux client for Google Drive, we use rclone to mount and access Google Drive. Rclone can also mount Google Drive on Windows, and it has many advanced features such as API rate limiting, server-side copy, etc.

Rclone Install

The most stable version of rclone for Google Drive is 1.52.3, which has a working pacer algorithm that reduces API quota errors; that algorithm was overwritten in later releases. However, it lacks newer features such as the write-back timeout and network mode.



Rclone is a self-contained static binary with no dependency libraries, so you may either extract the binary and run it, or install it from a package.

If you are new to Linux, it's easier to run "sudo su -" to become the root user before running any commands.


On CentOS/RHEL:

yum -y install fuse

yum -y install ./rclone-v1.52.3-linux-amd64.rpm

On Ubuntu/Debian:

apt install ./rclone-v1.52.3-linux-amd64.deb
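The package files above live on rclone's download server, which uses a predictable URL pattern; a small sketch (the variables are ours, and on CentOS you would swap the .deb suffix for .rpm):

```shell
# Build the download URL for an rclone release (pattern from downloads.rclone.org).
VERSION="v1.52.3"
ARCH="linux-amd64"
URL="https://downloads.rclone.org/${VERSION}/rclone-${VERSION}-${ARCH}.deb"
echo "$URL"
# then fetch it, e.g.: wget "$URL"
```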

Rclone config

Run "rclone config".

Choose Google Drive and name it, say, teamdrive1. Accept all defaults (we will tune them in the config file later). At the remote config section, choose "no" for auto config (unless you have a GUI desktop).

After finishing the config and quitting, try to mount with "rclone mount teamdrive1: /mnt &", assuming teamdrive1 is the name you chose. Run "df" and you should see the drive; if not, you need to redo the config. To stop the mount, kill the background job with "kill %1"; if you have multiple background jobs, run "jobs" first to find the right one.
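If you want to script the "did it mount" check instead of eyeballing df, you can look the path up in /proc/mounts; a small sketch (the helper name is ours):

```shell
# is_mounted PATH: succeed if PATH appears as a mount point in /proc/mounts
is_mounted() {
    awk -v m="$1" '$2 == m { found = 1 } END { exit !found }' /proc/mounts
}

# usage after "rclone mount teamdrive1: /mnt &":
#   sleep 2
#   is_mounted /mnt && echo "mounted" || echo "mount failed"
```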

If successful, create a new mount point with "mkdir /mnt/teamdrive1"

Run rclone config again, add a crypt remote pointing to the Google Drive remote, then add a cache remote pointing to the crypt remote. The rclone docs suggest adding the cache remote first, but in our testing this order worked well; it's up to you.

Edit the /root/.config/rclone/rclone.conf and use the optimized parameters below.

[teamdrive1]
type = drive
client_id = <snip>
client_secret = <snip>
scope = drive
upload_cutoff = 32M
chunk_size = 32M
pacer_burst = 1
server_side_across_configs = true
#token = <snip>
team_drive = <snip>
dir_cache_time = 8760h

[teamdrive1e]
type = crypt
remote = teamdrive1:
filename_encryption = standard
directory_name_encryption = true
password = <snip>

[cache1]
type = cache
remote = teamdrive1e:
rps = 8
workers = 10
chunk_total_size = 2G
info_age = 8760h


Set up the systemd service:

vi /etc/systemd/system/rclone.service

# /etc/systemd/system/rclone.service
[Unit]
Description=Rclone VFS Mount
After=network-online.target

[Service]
#ExecStartPre=/bin/sleep 10
ExecStart=/usr/bin/rclone mount \
--allow-other \
--config=/root/.config/rclone/rclone.conf \
--vfs-cache-mode writes \
--vfs-cache-max-age 1m \
--dir-cache-time 8760h \
--rc \
--transfers 100 \
--use-mmap \
--syslog \
--no-traverse \
--tpslimit 8 \
cache1: /mnt/teamdrive1
ExecStop=/bin/fusermount -uz /mnt/teamdrive1

[Install]
WantedBy=multi-user.target



Start the mount:

systemctl enable rclone

systemctl start rclone

If you encounter errors, check /var/log/messages (CentOS) or /var/log/syslog (Ubuntu) and try again.

By default rclone uses /tmp/rclone for its cache. If you don't have enough space in /tmp, you can set up a symbolic link or point the cache at another directory (e.g. with the --cache-dir option).
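The symlink approach can be sketched as below (the paths are demo stand-ins; in practice you would link /tmp/rclone to a directory on a disk with space, e.g. /data/rclone-cache):

```shell
# Point a cache path at a larger disk via a symlink (demo paths).
big_disk="$(mktemp -d)"              # stand-in for e.g. /data
mkdir -p "$big_disk/rclone-cache"
link="/tmp/rclone-demo-$$"           # stand-in for /tmp/rclone
rm -f "$link"
ln -s "$big_disk/rclone-cache" "$link"
readlink "$link"                     # shows where the cache really lives
```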



For Windows you also need to download and install WinFsp and NSSM:



Install WinFsp using its installer, then copy rclone.exe and nssm.exe to c:\windows\system32\

Follow the same process as above to configure rclone, or copy the previous Linux config if you already have one.

Try the mount on the command line with "rclone mount cache1: z:". If you see the drive in Explorer, press Ctrl-C to stop the mount.

Install the service by running "nssm install rclone"

For Path put c:\windows\system32\rclone.exe; for Arguments put the line below:

mount cache1: z: --dir-cache-time 8760h --rc --transfers=100 --allow-other --vfs-cache-mode writes --use-mmap --no-traverse --tpslimit 8 --config c:\users\User\.config\rclone\rclone.conf

Replace "User" with your login id if it differs.

Start rclone and you should see drive z:

nssm start rclone

If it's not working, add a log option (e.g. --log-file) to write to a file and check it for errors.


Service accounts with SA-gen

Warning: Google limits uploads to 750GB per day to prevent abuse; please respect that. Using service accounts may get your Google account banned. This is for educational purposes only. If you decide to continue, please don't use your primary Google account; we take no responsibility.

Go to https://groups.google.com and create a new group, say yourgroup@googlegroups.com.

Go to https://cloud.google.com/sdk/docs/install-sdk and follow the instructions to install gcloud sdk.

Run "gcloud init" to authenticate your Google account.

Assuming you installed sa-gen in /opt/sa-gen, open /opt/sa-gen/sa-gen and update the parameters:

GROUP_NAME="yourgroup@googlegroups.com"


Note: the project base name must be unique across all of Google; most errors are due to the name already being taken. Short is fine, but it must be unique. You can create more than 100 accounts, but you will have to add them one by one to the Google Group later, unless you use your own Workspace groups to batch-add them, which we don't cover.

Run "./sa-gen"; you should see the key files created in /opt/sa.

Add service accounts to Google group

Open the allmembers.csv file; the second column contains all the service account addresses. If you own a Workspace you may import this file into Google Groups, but we prefer the free Google Groups (Google cannot ban its own domain). Add the service accounts one by one to your Google Group; it's a tedious process, but you only need to do it once.
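To get the addresses as a plain list for pasting, the second column can be pulled out with cut; a sketch (the sample rows and the header layout are assumptions, so check your actual file):

```shell
# Stand-in for the real allmembers.csv (rows and header are made up).
csv="$(mktemp)"
cat > "$csv" <<'EOF'
Member,Email,Role
sa1,sa-1@demo-project.iam.gserviceaccount.com,MEMBER
sa2,sa-2@demo-project.iam.gserviceaccount.com,MEMBER
EOF

# Pull out column 2 (the service account addresses), skipping the header row.
cut -d, -f2 "$csv" | tail -n +2
```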

Submit a ticket to us to add this Google Group as a member of your shared drive. Once done, all service accounts instantly have access to the shared drive.

Configure rclone with a service account and set up rotation

Now that the service accounts are configured, we can use them instead of client secrets or tokens. Open rclone.conf, comment out the client secrets, and use only the service account:

[teamdrive1]
type = drive
#client_id = <snip>
#client_secret = <snip>
scope = drive
upload_cutoff = 32M
chunk_size = 32M
pacer_burst = 1
server_side_across_configs = true
service_account_file = /opt/sa/1.json
team_drive = <snip>
dir_cache_time = 8760h


Restart rclone and it should come up fine.

Now we want to rotate service accounts every hour using rclone's remote control API. Run "crontab -e", add the schedule below, and save:

0 * * * * /usr/bin/rclone rc rc/noop service_account_file=/opt/sa/$(($(date +\%-H) + 1)).json >/dev/null 2>&1


This rotates through service account files 1.json to 24.json, one per hour (the current hour 0-23 maps to files 1-24). Your activities will not be interrupted, as only new API calls use the new service account.
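The hour-to-file mapping can be sanity-checked outside cron; a sketch, with the helper name ours and the /opt/sa/1.json-24.json file names assumed from above:

```shell
# Map an hour-of-day string (00-23, as printed by `date +%H`) to a key file 1-24.
hour_to_keyfile() {
    h="${1#0}"                       # drop a leading zero so "08" -> "8" (avoids octal parsing)
    printf '/opt/sa/%d.json\n' "$(( h + 1 ))"
}

hour_to_keyfile "$(date +%H)"        # key file for the current hour
```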


On Windows it's similar: create an hourly (and at-startup) scheduled task that runs rclone with the arguments below to switch the service account:

rc rc/noop service_account_file="c:\opt\sa\%time:~0,2%.json"

Note: %time:~0,2% may carry a leading space for hours before 10 and ranges 0-23, so you may need to adjust it to match your key file names.


Please remember that when doing other ad hoc tasks such as batch copies/migrations, you should exclude the 1-24.json files; these are reserved for mounting. We will show you how.

Use Sasync for batch copy/migration between shared drives

sasync automates switching service accounts when doing batch copies. First, download and install sasync in /opt.


open sasync.conf and edit the following:


The only important parameter here is MINJS, which tells sasync to start using service accounts from the 25th key file. For the rest, just verify they are correct for your setup.
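As a rough sketch, the relevant part of sasync.conf might look like the line below (only MINJS is taken from the text above; treat the names in your actual sasync.conf as authoritative):

```
MINJS=25   # first service account key file sasync may use; 1-24 are reserved for the mount rotation
```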

Open sets/set.file, put in what you'd like to copy, and save, e.g.:

copy   teamdrive1: teamdrive2:


Start a "tmux" session because this can take a while. Run "./sasync set.file" to start the copying process. To detach from tmux, type Ctrl-b d; to reattach, type "tmux attach". It's safe to close the SSH window without quitting tmux (in fact, you're supposed to); the job keeps running in the background. The copy is performed inside Google's datacenters over 10Gbps+ links, so no local bandwidth is used.

If you add another Google Drive remote and want rotation for it too, it needs its own rc port instead of the default 5572: in that mount's service config add "--rc-addr localhost:5573", and in its cron job add "--url http://localhost:5573".
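A sketch of the two changes for a second mount (the port and key file path are examples):

```
# in the second service unit, inside ExecStart=/usr/bin/rclone mount ... add:
--rc-addr localhost:5573

# second cron entry, pointing rc at that port:
0 * * * * /usr/bin/rclone rc --url http://localhost:5573 rc/noop service_account_file=/opt/sa2/1.json >/dev/null 2>&1
```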




