How about using rclone for that? You are already in r/rclone so just use it like this:
For a general tutorial you can find a ton on the Internet and the docs on https://rclone.org are also just fine.
Or, as a user-friendly alternative, "duplicati".
The number changes based on the backlog as rclone does not get all the information up front:
--max-backlog=N
This is the maximum allowable backlog of files in a sync/copy/move queued for being checked or transferred.
This can be set arbitrarily large. It will only use memory when the queue is in use. Note that it will use in the order of N kB of memory when the backlog is in use.
Setting this large allows rclone to calculate how many files are pending more accurately and give a more accurate estimated finish time.
Setting this small will make rclone more synchronous to the listings of the remote which may be desirable.
https://rclone.org/docs/#max-backlog-n
So it's working exactly as expected.
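If you want rclone to look further ahead (at the cost of some memory), you can raise it. A hypothetical example with an arbitrary number and placeholder remote names:
rclone sync source: dest: --max-backlog 200000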
You can do this right inside rclone. Check out the Union remote. Its level of control is inspired by MergerFS but offers slightly different options.
You can tell it priorities, etc. You can even do things like make writes go to local storage and then separately sync to the remote (like the VFS cache does, but more explicitly). There are all kinds of fun things you can do.
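As a minimal sketch of a union remote in rclone.conf (the upstream names and paths here are made up, and on recent rclone versions the key is upstreams): if I recall the policies right, create_policy = ff sends new files to the first upstream listed, while the default epmfs picks the upstream with the most free space.
[pool]
type = union
upstreams = /fast/local/disk gdrive:archive
create_policy = ff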
You can use filtering to include or exclude folders. So you would set the source to be the parent to all the folders you want to sync and then use filtering so rclone only acts on the correct folders.
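For example (folder names are placeholders), something like this should restrict a sync to just two of the subfolders under the parent:
rclone sync /data/parent remote:parent --include "/FolderA/**" --include "/FolderB/**"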
Google drive has a 750 GB daily upload limit. I think that is what you're running into. Just wait a day and you'll be able to upload again.
You are using the wrong flags. It is confusing, but all --cache flags with a few exceptions are for the deprecated cache remote. It is all in the --vfs-cache-mode type stuff.
Remove:
--cache-tmp-upload-path=/home/me/my_mounts/tmp_cache/rclone/upload \
--cache-chunk-path=/home/me/my_mounts/tmp_cache/rclone/chunks \
--cache-workers=8 \
--cache-writes \
...
--cache-db-path=/home/me/my_mounts/tmp_cache/rclone/db \
Using --cache-dir is fine, but you need to add --vfs-cache-mode full and probably many other flags. I strongly suggest reading the entire mount docs very carefully, especially the section on VFS caching.
You can also remove
--fast-list \
--drive-use-trash \
--checkers=16 \
as the first does nothing on mounts (but is otherwise good for bucket remotes), and the drive one because you aren't using Google Drive (so you don't need any --drive-<...> flags). --checkers isn't used on mount either.
Is there a reason for --bwlimit=40M? That is pretty low, right?
Start with the absolute simplest mount possible plus cache and --no-modtime:
rclone mount --vfs-cache-mode full --cache-dir "<dir>" --no-modtime remote: mountpoint/
And then only add flags as needed. BTW, --no-modtime is good to have if you don't care about it. It'll save a lot of work.
You don't need to setup a remote for rclone to access local files. https://rclone.org/local/
So all you need to do is setup a crypt remote for your destination to encrypt stuff.
Then do something like this: rclone copy D:/folder/ crypt:
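For reference, the crypt section in rclone.conf ends up looking roughly like this (the remote name and target path are just examples, and the password lines are the obscured values rclone config writes for you):
[crypt]
type = crypt
remote = gdrive:encrypted
password = <obscured by rclone config>
password2 = <obscured by rclone config>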
Hello there
With a few hours on my hands, I came back to this...
So I tried using the chmod +x /usr/bin/local/rclone command.
I get the response: No such file or directory
I am now doubting if this Rclone was installed properly, if at all...
I used the Script installation: curl https://rclone.org/install.sh | sudo bash
I think so.
Normally, when you create a client id you basically give rclone total access.
https://rclone.org/onedrive/#getting-your-own-client-id-and-key :
5. Search and select the following permissions: Files.Read, Files.ReadWrite, Files.Read.All, Files.ReadWrite.All, offline_access, User.Read. Once selected click Add permissions at the bottom.
But you can set more limited permissions: https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/permissions_reference
Files.ReadWrite.AppFolder might be what you want.
> Still learning this stuff.
Seriously, the way you "learn this stuff" is to read the docs for the commands you use. You don't have to memorize it. You don't even have to read every word. But you should at least have an idea of what is there so that when you do need a feature, you have some idea that it is there.
I mean, it is a section header! Please, read all of the docs for mount and your remote.
Also, is there a reason you are using --allow-non-empty? That really shouldn't be needed.
And same for -L. By context, TS3: is a Gdrive, which doesn't support links. So why have a -L, --copy-links flag?
Along the same idea of RTFM, also make sure you understand what flags you use and in general, use as few as needed
I am 99% sure this is by design without a way to disable it. Maybe try setting --bwlimit? See also this forum post.
It has to buffer to your local disk and then upload. It won’t appear immediately.
You can try https://rclone.org/commands/rclone_copyurl/
But it won’t do exactly what you’re calling wget to do.
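For reference, a basic invocation looks something like this (URL and destination are placeholders):
rclone copyurl https://example.com/file.iso remote:path/file.iso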
Try running this in the cloud instead for more bandwidth...
Glad that it worked, it took a while to create.
My drive name is MERLIN:
My crypt remote is MERLIN:BACKUP
so how do I run rclone commands? I have been here but still will need some guidance https://rclone.org/crypt/
Don't understand what flags I should use.
Rclone is written in go and can be downloaded as a binary. As long as you have execute permissions on the binary you should be good to go. I imagine you can brew install it too.
https://rclone.org/install/#macos-installation-from-precompiled-binary
I don't use rclone browser as it's a bit dated and I'm not on Windows.
You can run rclone config and create a crypt remote.
https://rclone.org/crypt/#i-class-fa-fa-lock-i-crypt
A crypt remote is just a folder on your existing remote that will contain everything encrypted.
rclone config file shows the location of the file, and you can copy that somewhere or back it up to a safe location, as it contains your password. Remember though, anyone can get access if they compromise your rclone.conf file, so be sure to keep it safe.
You can filter files using regex, for full documentation read Filtering.
You want to use "(2[5-9]|3[0-9]|4[0-2])$" in this case.
I believe the command would look like:
rclone copy --include "(2[5-9]|3[0-9]|4[0-2])$" (from) (destination)
(The command above is not tested.)
Okay, here's the difference in copy vs sync for the situation that you are asking re: missing files.
Copy
If files on your Source are missing (regardless of what's on your Destination), only new files will copy. If you have the same file on both the Source and the Destination, then it will check the time and size of the file as a poor man's integrity check (which is a lot faster than most any checksum due to lack of server-side hashing like what you'd find in `rsync`.)
Sync
If files on your Source are missing, BUT files on your destination are there? THEY WILL BE DELETED FROM THE DESTINATION.
For the needs that you're describing, I'm most certain that you don't want to use Sync, and would rather use Copy.
Here's something to get you started, and furthermore, try `rclone --help` and be certain to read the excellent documentation at https://rclone.org/docs/
rclone \
--verbose=1 \
copy \
SOURCE:PATH1 \
DESTINATION:PATH2
Then I don't know what's wrong. It should work as long as vv.expansion is the end of the filename.
You may have already seen this, but just in case, here are the filter rules for rclone: https://rclone.org/filtering/
When you run rclone as a normal user (without sudo), rclone will look for the config in /home/<yourusername>/.config/rclone/rclone.conf.
When you run rclone as root (with sudo) it looks for the config file under /root/.config/rclone/rclone.conf.
One way to tackle this is by manually specifying the config file location:
sudo rclone mount remote: /mnt/remote --config /home/<yourusername>/.config/rclone/rclone.conf
What you are describing is done by the MOVE command (see the docs).
But I urge you to use the SYNC command and delete the local files only after you have checked that the transfers completed without any problems. Errors are not uncommon, and you cannot re-do it with the MOVE command: the local files are deleted as they are transferred, even if the whole transfer never completes successfully.
Use --dry-run option.
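A rough sketch of that workflow, with placeholder paths: preview first, then sync, and only delete the local files yourself after checking the run finished without errors.
rclone sync /local/data remote:backup --dry-run
rclone sync /local/data remote:backup -v
# check the output/exit code, then remove the local copies manually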
Read the synopsis here. Sounds like you want to use rclone copy (ignores duplicates between source/destination).
It will check what's already there and only do what's missing.
tl;dr, use --dry-run
rclone works as a simple get / put / delete, but I don't think it'll monitor local directories for changes -- could be wrong, haven't looked into it. (The --help output seems to mimic rsync, which by itself doesn't monitor things either. It will optionally skip already-copied files, just like rsync.)
It might be possible to use one of the Android automation tools to do something, but I think you'll end up with a homegrown solution. There's a million apps to do that, but I think you'll need an extension to rsync to manage it.
Try using rclone in combination with restic or borgbackup. They can keep the old versions for you while only saving changed files again and deduplicating your Backups.
> it appears the files go to a cache somewhere on my PC before going to the remote.
With a VFS caching mode enabled, yes, that's true. You can configure where and for how long with various flags, as documented online here. If you want files added to an rclone mount to be uploaded directly to the remote instead of cached locally, just set --vfs-cache-mode=off. But that has lots of other downsides.
If you really just need to upload a few specific files and nothing else, using the CLI (or the equivalent in rclone browser) seems cleaner than going through an rclone mount, which is what you're already doing. Cheers!
Alternatively, you can also use rclone check. It even has a flag to output all the differences between the remotes into a text file. I've used this setup before to selectively copy files to a remote that already had a subset of files present.
First of all, if you are using rclone as a backup, use --backup-dir.
But anyway, your
rclone copy ~ upf_drive:backups/mavin/home/$DATE
plan will make a full copy every time! Is that what you want? If so, you can actually make this easier by using --copy-dest (docs). This way you don't need to transfer it every time. Or you can use --compare-dest to transfer the new files to their own dir.
Personally, I prefer --backup-dir since it means the main dir is a clone of my source but I have everything saved backwards.
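A hedged sketch of what that can look like, reusing the path from above ($DATE and the exact layout are up to you):
rclone sync ~ upf_drive:backups/mavin/home/current --backup-dir upf_drive:backups/mavin/home/archive/$DATE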
That'll create tons of duplicates of your data. That's not going to be a good setup.
Instead, try using backup directory.
So you end up copying files to the same directory, with any changes (removals) going to the dated folder. Anything that hasn't changed doesn't get backed up.
You could also try Kopia or Restic instead of rclone for backups.
So, your config (where you also should have redacted the refresh token, but oh well) shows the remote name is onedrive: and not "one drive":.
Start with the simplest! Ensure ~/OneDrive is empty, then try:
$ rclone mount onedrive: ~/OneDrive -vv
Those logs will be super, super, super helpful if that doesn't work.
Also:
> 4. Latest version 10.50.2
Do you mean 1.50.2? If so, that is 7 versions and many years old. The fact that you call it "latest version" doesn't make sense. How did you install it? Follow https://rclone.org/install/ and see what happens.
If that is not what you meant, please post the full, unredacted result of
$ rclone version
(there is nothing sensitive there)
Like /u/CaponeFroyo said, you can use . for current directory.
But I’ll also add that you can create a local remote that specifies a location, https://rclone.org/local/
I use this to make some things easier regardless of the directory I’m at.
Yea I know what you mean about the data loss - I've been having good luck lately, so I'm gonna try to get all of my options set, then maybe go back and troubleshoot what could've gone wrong with the previous mounts. I think you're on to something though with it being related to caching to /tmp and how that's handled.
I was adding the uid/gid options because the drive had mounted as root, but now I have my user called out in the systemd service which is working well. Here's where I've ended up at the moment. I'm still getting an error about the poll-interval not being supported for the awscrypt remote, but I assume that's set by default and not a real issue.
Yeeah, that's it; the latest version of Mint is 20.2, while 20.3 is supposed to be out by Christmas.
Mint says: "Long term support release (LTS), supported until April 2023", but I think all LTS is going to get you is known bug fixes for existing packages and not anything new. Still, it's supported, and getting bug-fixes for stuff is nothing to sneeze about.
(See the Log4Java news for a problem and its bug fix -- something which has nothing to do with this.)
u/quantum_libet, can you tell me where to select that?
I use https://rclone.org/onedrive/
From that page:
<<CUT>>
Choose a number from below, or type in an existing value
1 / OneDrive Personal or Business
\ "onedrive"
2 / Sharepoint site
\ "sharepoint"
3 / Type in driveID
\ "driveid"
4 / Type in SiteID
\ "siteid"
5 / Search a Sharepoint site
\ "search"
Your choice> 1
Found 1 drives, please select the one you want to use:
0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
Chose drive to use:> 0
<<CUT>>
However, it doesn't give me the option to select a drive, but chooses my Personal account. How do I force the choice for Business?
Ignasz
If the remote supports moves, it will use that. If the remote supports copy but not moves, it will copy and then delete. If it does not support either, it will download and reupload.
So it depends entirely on the remotes you are using. See Overview (second major table). This is done when using commands like move and moveto, or when using --track-renames with copy and sync.
So the way I’d do it is different.
Instead of storing MD5’s, you could use the files date.
Check https://rclone.org/filtering/. There's
--max-age - Don't transfer any file older than this
which would help for this approach.
I would create a script so I don’t have to think about this. The first thing I’d do is take a timestamp of ‘now’, CURRENT_TIME
Then I’d read a file with the previous time stamp, PREVIOUS_TIME
I’d run rclone with --max-age $PREVIOUS_TIME
Then I’d update the file with the content of $CURRENT_TIME.
On the next run, CURRENT_TIME becomes PREVIOUS_TIME and you only get files newer than that.
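A rough, untested sketch of that script (the state file location and paths are my own placeholders):
#!/bin/bash
# where we remember when the previous run happened
STATE_FILE="$HOME/.rclone_last_run"
CURRENT_TIME=$(date "+%Y-%m-%d %H:%M:%S")
PREVIOUS_TIME=$(cat "$STATE_FILE" 2>/dev/null || echo "2000-01-01 00:00:00")
# only pick up files modified since the previous run
rclone copy /source remote:dest --max-age "$PREVIOUS_TIME" -v
# save the new timestamp for the next run
echo "$CURRENT_TIME" > "$STATE_FILE"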
> When I type rclone config n then local it prompts me to choose the name of the path, but not the path of the directory
You really don't need it with local since it is always implied. But you can do an alias remote. That will give you what you're looking for.
[mylocal]
type = alias
remote = /path/to/local
(or a Windows path, I don't know)
Every remote has to have a name. So set the name to anything then you'll be asked for the path.
Or you don't have to even create a remote.
rclone copy C:\source C:\destination should work.
> If so, rather than directly mount the shares should I configure "remotes" for the TrueNAS shares?
This is what I would do. One less thing to possibly break. Rclone can connect directly to the TrueNAS server using ssh (sftp). You would configure this the exact same way in TrueNAS as Backblaze, but you would point TrueNAS at your RPi instead.
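If you go the sftp route, the remote definition in rclone.conf is small; something like this (host, user, and key path are placeholders):
[truenas]
type = sftp
host = truenas.local
user = backupuser
key_file = /home/pi/.ssh/id_rsa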
Alternatively you can use zfs send/receive. Instead of syncing individual files, this would sync entire datasets.
writes wouldn't surprise me as still being pretty slow. IMO most people will want cache mode full to optimize the user experience. That's probably also the mode that is closest to what most people experience using an official client, so the expectation is for that sort of behavior and experience.
OP: Specifically the settings in question here:
> Where do I begin to search for the cause of this
Run your mount with -vv and dig into the logs.
Also, just general advice: if you can (not always possible), use rclone copy or the like rather than using the mount to do uploads.
Also, could you be hitting API or bandwidth limits? Do you have your own client ID? That may help a lot!
Can you post an example of errors you're getting? Also, what amount of upload/download are you doing daily?
Also, are you using the default rclone client id or did you make your own? (https://rclone.org/drive/#making-your-own-client-id)
> https://rclone.org/filtering/
Thanks. A quick read suggests that is going to be helpful, especially allowing me to create a "backup list" file to reference. Hope to start playing with this in anger after a server rebuild!
Not sure what you mean. If you do not have a web browser on the machine you are installing rclone on, you need to have rclone installed on another machine (or a config file) for the authorization, as explained here https://rclone.org/remote_setup/. Connecting to http://127.0.0.1:53682/auth from the other machine will obviously not work.
To make rclone stop retrying to transfer files you can use the flag --low-level-retries=1. This just avoids your problem and doesn't actually fix it.
You're getting tcp I/O timeouts which points to network issues. Maybe try setting up the remote as a sftp remote instead of ftp.
/u/impactedturd's suggestion is the best one, but for one-offs or if you do not have the ability to ls, you can also use rclone cryptdecode (DOCS). It doesn't need to actually connect to a remote.
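Usage is something like this (remote name and encrypted file names are placeholders):
rclone cryptdecode yourcrypt: encryptedfilename1 encryptedfilename2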
First, confirm via a hash that it is the same file. Not just shutting it early.
Second, I wonder if rclone downloads with a sparse file? Could be when using multiple threads for download. If so, sparse files look like the full size. Google around on how to get the real size of a sparse file. I don’t recall but I’ve seen it before
update:
Check out --local-no-preallocate and --local-no-sparse in the local docs.
My understanding is that mounting over "SFTP" actually IS just mounting over SSH for most purposes! Most all SSH servers have SFTP support, including the SSH server built into macOS you're already using. In rclone you'd use the SFTP backend to mount an SSH server. (there's no "SSH" rclone backend).
So no need for a special SFTP server software. And you can just use built in user file permissions to limit permissions. Though the appstore app might make permissions and setup easier.
bonus info: confusingly, FTP and FTPS are totally separate protocols. FTP is the old school standard file sharing protocol, and FTPS is just an FTP extension with encryption. SFTP is entirely separate and built on SSH. You might have downloaded an FTP[S] server?
The FTPS wiki article touches on this confusion:
> FTPS should not be confused with the SSH File Transfer Protocol (SFTP), a secure file transfer subsystem for the Secure Shell (SSH) protocol with which it is not compatible. It is also different from FTP over SSH, which is the practice of tunneling FTP through an SSH connection.
Make sure /usr/local/bin/ is in your path environment variable. Use echo $PATH to print it. The path variable tells bash where to look for binaries when you run a command.
You can also run rclone by using the absolute path: /usr/local/bin/rclone config. If that doesn't work then something else is the problem.
You can also use the install script that should sort everything out: curl https://rclone.org/install.sh | sudo bash
I really, super duper strongly suggest not using the mount for upload. Can it do it? Probably. But should you? no!!!
The mount has to do all kinds of stuff to make it seem like a real file system. And all cloud services throw errors. Using a mount means that it is (a) that much harder for rclone to fix (though using --vfs-cache-mode writes can help) and (b) severely obscures the upload progress (it may look uploaded since it copied fine, but it isn't, and breaking the mount can break it).
Use rclone via command line or a (web)GUI. Or if you don't need encryption, use one of the many B2 integrations.
If you really don't want your users to worry about it, I would have them drop the files in some central place and do a cronjob to rclone copy (not sync) and then delete. If you really want to get fancy, you may be able to use the rclone union remote and have them drop it there and then still have a copy+delete script.
You didn't actually say what cache mode you are using. Is it --vfs-cache-mode writes? In that case, I wonder, though do not know for sure, if it can't hold the file because it hits the max cache size. It may just not be an option.
> it make more sense to just use --vfs-cache-mode minimal
This would probably fix one issue and give you another one. As I said above, cloud connections are not super stable and, to quote the docs, this can't retry.
Also, I don't see it documented but it likely can't parallelize the upload either. B2 is often considered one of the slower upload-speed-per-thread providers but does very well with lots and lots of threads. See rclone B2 Docs where it is suggested to use as many as 32!
All of this is to say, don't use a mount for upload; especially for large files.
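For example, a plain CLI upload along these lines (remote name and paths are placeholders; the 32 transfers figure is the one the B2 docs mention):
rclone copy /path/to/large/files b2crypt:backups --transfers 32 -P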
Root folder ID
You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your drive.
Normally you will leave this blank and rclone will determine the correct root to use itself.
However you can set this to restrict rclone to a specific folder hierarchy or to access data within the "Computers" tab on the drive web interface (where files from Google's Backup and Sync desktop program go).
In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the drive web interface.
So if the folder you want rclone to use has a URL which looks like https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh in the browser, then you use 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh as the root_folder_id in the config.
NB folders under the "Computers" tab seem to be read only (drive gives a 500 error) when using rclone.
There doesn't appear to be an API to discover the folder IDs of the "Computers" tab - please contact us if you know otherwise!
Note also that rclone can't access any data under the "Backups" tab on the google drive web interface yet.
It’s in the url.
> rclone about uno:
>
> Failed
Does 'uno' exist? Has it been properly created and configured? Do the access URLs work?
Rclone Commands -- full rundown on all Rclone commands.
I don't even know what you do know to begin with.
Step 1: Do you know how to make a remote?
Step 2: Do you know how to make a crypt remote ?
Step 3: Do you know how to mount a remote?
This is all stuff found at https://rclone.org/commands
Yeah, rclone is much more predictable and consistent than rsync - at least to my old tired mind.
A couple of lines from the docs that are relevant:
> If dest:path doesn't exist, it is created and the source:path contents go there.
and
> If you are familiar with rsync, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination.
Assuming the problem is 53GB does not fit in one remote: Take a look at the "union" backend, which lets you treat the two 50GB accounts as one 100GB remote. Please make sure to read the complete documentation.
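A minimal sketch of that union, assuming the two accounts are already configured as acct1: and acct2: (names made up; recent rclone versions use the upstreams key):
[bigremote]
type = union
upstreams = acct1: acct2: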
VPS is just like a remote machine, nothing special about it. Are you comfortable working with the command line? If yes, the VPS will probably be running a Linux-based OS; install rclone using the command here: https://rclone.org/install/, use rclone config to set up remotes and then use rclone copy to transfer the files.
Reading the docs the max size is 250gb so you’re good there. But it also says tokens don’t expire for 90 days. So something else is going on.
How long is it taking to run? Can you run with -v or even -vv and post the results if you really want to dig into what's going on?
> Is the gdrive:backup in the command you placed a remote to the user's GDrive?
Yes.
> If so, wouldn't I need to log in as the user to create the remote?
No. In this case, service accounts basically have admin access. So they authenticate rclone as an admin, which then has permission to impersonate any user of the domain.
Part of the setup process for service accounts is 'delegating domain-wide authority'. https://rclone.org/drive/#1-create-a-service-account-for-example-com
you can 'impersonate' other users:
rclone -v --drive-impersonate [email protected] lsf gdrive:backup
https://rclone.org/drive/#drive-impersonate
I think you have to setup service accounts first. https://rclone.org/drive/#service-account-support
My apologies, I thought I read Google Drive and rclone. With Drive you can create a remote and either mount it as a drive on your system or use it in a copy command. Here is the rclone config for photos.
Create your google remote
On your local computer, run the command.
cd inside the rclone folder if on Windows.
simple example - rclone copy remote:/path <local file path> -P
You got me curious. I have a large repo I sync (actually using my bi-directional sync tool). And it is on a VPS so I can quickly test!
Note, the directory where it is stored is called crypt/ and the crypted remote is crypt:. The : is the difference between the encrypted files and what rclone says.
Just the files at rest:
$ rclone size crypt/
Total objects: 4404
Total size: 30.582 GBytes (32837081825 Bytes)
$ rclone size crypt:
Total objects: 4404
Total size: 30.574 GBytes (32828876817 Bytes)
This is a 7.82491 MB difference. I can also look directly at the size on the remote of the files:
$ du -sc crypt/
32084172 crypt/
Which is off by a factor of ~1024. I am guessing this is in kilobytes (du's default 1024-byte blocks) and not bytes.
But this is still not a great comparison since I am relying on rclone to give me the sizes, but assuming rclone size is correct at least for the file system (which I think is fair!) then it is fine.
Just to compare to a quick python snippet
>>> import os
>>> s, c = 0, 0
>>> for dirpath, _, filenames in os.walk('crypt/'):
...     for filename in filenames:
...         s += os.stat(os.path.join(dirpath, filename)).st_size
...         c += 1
...
>>> print(f'Objects {c}, Size {s}')
Objects 4404, Size 32837081825
I do seem to recall that rclone has a slight ambiguity in size comparison with crypt. Something like it being off by a few bytes. I don't understand why and I could be misremembering it, but I suspect that is where the difference is coming from. I think it is safe to say, based on the first test, that rclone is trying to compute the sizes correctly.
If you're concerned, try running `cryptcheck` which will tell you more authoritatively what's missing.
So you know how it asks you "Configure this as a team drive?"
Something tells me you overlooked the instructions.
Source: https://rclone.org/drive/
Configure this as a team drive?
y) Yes
n) No
y/n> y
Fetching team drive list...
2018/05/19 12:58:25 DEBUG : drive: Saved new token in config file
Choose a number from below, or type in your own value
1 / TeamDrive
\ "XXX"
Enter a Team Drive ID> 1
You could probably use curl and rclone rcat (docs), but depending on the rclone remote, you may need enough storage on your machine to spool the file.
$ curl <url> | rclone rcat remote:path/to/file
Note again what /u/taijitu said and use a VPS or something if you do not have (or do not want to use) the bandwidth.
Well, if you want to do anything with rclone you need to configure the remote. If you don't know whether it is configured, it probably is not. Go to the rclone page and configure the remote first; after that we can look at your problem.
I posted to the rclone forum, and found out why this is happening. It's actually documented behavior, although it's not that obvious - until it is, I guess.
Nick Craig-Wood from rclone explained it to me. It has to do with file name encoding, and is explained here:.
There is a command-line flag (--onedrive-encoding), or a config file parameter, that can configure it. I wasn't sure how to specify every one of the defaults except the pound sign, so instead I just added this to my config file:
encoding = None
I think I'm safe, because I only want to use this command for the Calibre ebook app, and since Calibre is a cross-platform app, it's pretty good about choosing file names, and seems to avoid the ones that are not allowed on Windows, and probably OneDrive.
Anyway, it worked for files with the pound sign, and updated the original files as expected. If the files had originally been synced by rclone, they would have had the substitute characters to start with, so it would not have created new files.
More details on your setup would be helpful. What's the rclone command you're running and what does your config look like?
And what does "slow" mean in this case? Is it that accessing files in the mounted drive is slow? Or is uploading slow? And what do your usage patterns look like? If you're dealing with lots of small files (for example, syncing a constantly changing node_modules folder) then the advice is very different.
Also OneDrive could just be rate limiting you if you've crossed some bandwidth threshold. I recall they do that.
One thing that might help, depending on what is "slow", is to wrap your OneDrive mount in a cache mount. This creates a layer of persistent local caching (distinct from the VFS layer caching). Docs on that here.
You can generate a configuration for a remote from the command line with rclone config create.
I don't think you can use rclone without actually establishing the config for the remote (i.e. you can't just pop an auth dialog and have it go). So you may have to do a little more work in your batch/script to get the user credentials and pass them to a create call.
Issue rclone config create --help to get more info, or read up more on it in the docs section of rclone.org.
Then you can issue the copy commands once the remotes are setup.
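As a rough illustration of the non-interactive form (remote name, type, and values are placeholders; your backend will need its own options and possibly an auth step):
rclone config create mylocal alias remote /path/to/dir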
> I suspect it is multiple ways to do the same thing and that's that.
That's what I suspect too but I just wanted to make sure.
> FYI, there is also `--password-command` which I think avoids the password sitting around in some global variable.
I read about this option too. As mentioned in the documentation, and if I understand correctly, it's just an alternative to using the RCLONE_CONFIG_PASS=password variable with the password in plain text, unless it's used with a secure program such as the pass password manager (which I don't use).
--password-command echo password feels as insecure as RCLONE_CONFIG_PASS=password to me, doesn't it?
Not sure whether the read + export combo is very secure either, but at least the password doesn't lie somewhere in plain text, be it inside the script or in a separate file.
I am not sure what the difference is. I suspect it is multiple ways to do the same thing and that's that. Just FYI, there is also `--password-command` which I think avoids the password sitting around in some global variable.
Oh, yeah, I noticed that too. Rclone only allocates so much memory to determine what its backlog of files to transfer is; if you have a lot of files to move, it may not get them all in the initial estimate. There's a switch, --max-backlog, that lets you adjust how much memory is allocated (and so how much more accurate the backlog is).
I was reading a forum post earlier where someone mentioned they were getting 90MB/sec. Then in a later release of rclone, it went down to under 25MB/sec due to changing from Google API v2 to v3; there's meant to be a workaround in the code now (this could all be old news).
I'm using put.io with webdav. My speeds are fine now, but I am having trouble starting the rclone mount with NSSM. Specifically, getting caching to work. It works fine if I start it with the command prompt. Perhaps there is something wrong/missing with my argument in NSSM?
mount remote: p: --config "[...]\rclone.conf" --allow-non-empty --allow-other --read-only --buffer-size 512M
> Enter this command into the command line:
curl https://rclone.org/install.sh | sudo bash
A brilliant way to screw up your system to a varying degree. Best case is you're putting untracked executable files into /usr/bin/.
follow the manual here:
https://rclone.org/drive/#making-your-own-client-id
you log in to the GCP console using your Gmail account, setup a project, enable Drive API and create Client ID + secret to use by rclone
you don't need to setup a Service Account
you must set the Team Drive (ex-Shared Drive) ID for every endpoint, it's not possible to use the same configuration for different Team Drives (but you can use the same Client ID + secret)
--bwlimit=BANDWIDTH_SPEC
This option controls the bandwidth limit. Limits can be specified in two ways: As a single limit, or as a timetable.
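For instance (numbers purely illustrative): a single limit would be --bwlimit 10M, while a timetable might look like:
rclone sync source: dest: --bwlimit "08:00,512k 19:00,10M 23:00,off"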
you're concerned about authorizing GDrive right? When you're setting up the new remote on a headless machine, just choose "No" when rclone asks if you want to use auto config: https://rclone.org/remote_setup/
Alternatively, you can just setup rclone on another machine and copy over the config file.
Thanks, it worked. I should've tried it before asking the question.
However I see that other parts of the official documentation can also be misleading, for example here https://rclone.org/commands/rclone_dedupe/:
Synopsis
By default dedupe interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with Google Drive which can have duplicate file names.
This is incorrect - MEGA can also have duplicate files.
Nothing in rclone (except mount) is bidirectional, unless the documentation is wrong. Copy and Sync aren't. (https://rclone.org/commands/rclone_copy/)
The difference between copy and sync is that sync deletes data from the destination that isn't on the source (regardless of remote/local), while copy doesn't.
But, what you're suggesting is exactly what I'm doing... I'm going to put rclone copy as a cron job on the file server. But it takes 2 commands to actually keep them consistent.
e.g.,
rclone copy /local/datastore dropbox:remote/datastore
rclone copy dropbox:remote/datastore /local/datastore
The first one pushes local changes to dropbox. The second pulls changes on dropbox (e.g., from another connected client) to local storage.
I'm trying to figure out if there's a real preference for which order to do them in, since I probably don't want them running at the same time.
My intuition is that it probably doesn't matter, but that push then pull is probably a little better because I use my file server MUCH more than any other connected computers and most of the point is an off-site backup.
Never mind. You are mistaken about the use of the size-only flag. Per the docs that flag compares files when syncing based solely on whether the file sizes match and ignores whether the modification dates and checksums match. It’s meant to speed up the scanning process.
I believe the --min-size flag is what you want.
Per the docs:
--min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
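So something like this (the threshold is just an example):
rclone copy source: dest: --min-size 100M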
You can not download something from a remote source (video site) to another remote source (mega) with rclone. You can however download from youtube to a local folder and then use rclone to upload to mega. The documentation of both programs will help. youtube-dl and rclone
Hmm..... I don't know if this will work or not, I came up with following steps.
change the above command to pipe the url to rclone copyurl.
https://rclone.org/commands/rclone_mount/
> The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user’s responsibility to stop the mount manually with
# Linux
fusermount -u /path/to/local/mount
# OS X
umount /path/to/local/mount
Literally read the docs - I am sick of you not reading them and asking to be spoonfed. Read, try, try again, then a 3rd time to be sure, then google it, THEN post your issues.
First create a remote (unencrypted). Then create another remote of type "crypt" and choose your existing remote as target.
Storage> 9
** See help for crypt backend at: https://rclone.org/crypt/ **
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
Enter a string value. Press Enter for the default ("").
Mate, I have a lot of patience, but being lazy wears it down quickly. RCLONE.ORG - read the docs. If you go to the site, go to the "Usage" tab, you'll see you just need to type "rclone version" and it'll tell you...
> curl https://rclone.org/install.sh | sudo bash
When I pasted this using right click ( as ctrl button does not work)
Could not resolve host curl https://rclone.org/install.sh | sudo bash : No such file or directory exit .
Btw how to type this - | .
When I pasted it appeared as \
No mate, chill, I do this every day, I’m sitting on the toilet and I can tell you the steps to do without looking. Update it another time, let’s just do rclone now.
Copy and paste this and press enter:
curl https://rclone.org/install.sh | sudo bash
You can only do 750GB per day and 100GB if you try to do server side copies.
You can run rclone sync remote1: remote2: --disable copy --bwlimit 8.5M and let it run as that makes it so you don't hit the daily limits. Currently a bug with server side copy anyway so you need the disable copy.
You'd have to re-run the sync once it's done to pick up any new changes that came in.
Make sure to create your own API key/client ID as well.
You'd want to setup your own client ID/API key if you have not:
https://rclone.org/drive/#making-your-own-client-id
For copy, you can use --fast-list to help, along with your own key; the key part is more important.
That should help with the 429 rate limits as they'll go away.
For the mount issue, you'd need to put the mount in -vv and drag/drop the file there, and we can see what is going on.
Normally the way vfs-cache-mode writes works is that when you drop a file in there, it will copy it locally first and upload it once it is written. This all happens one at a time. So if you drop a 40GB file there, it copies it locally and has to upload the 40GB which is dependent on your upload bandwidth.
With Google, you can only upload 2-3 files per second and 10 transactions per second. Those settings are going to generate a lot of rate limiting errors.
You really want to limit to lower transfers and lower checkers or it's going to be awfully bad (as you can see)
You should also make sure you have your own client ID/API key setup: https://rclone.org/drive/#making-your-own-client-id
I'm only learning here, but I don't think buffering to disk feature of VFS cache is the same as your expectation: first save locally, then upload asynchronously. To achieve that, you would need to wrap your remote in the "cache" remote. See Offline Uploading under .. https://rclone.org/cache/
Are you playing directly from your rclone mount?
Have you made your client ID/API key yet?
https://rclone.org/drive/#making-your-own-client-id
You have a hodgepodge of settings. Are you using a cache backend? If so, buffer-size should be 0M as it does its own thing.
--size-only isn't for mounts and can be removed.
If you aren't uploading any data from your mount, drive-chunk-size isn't needed.
If you are using the cache backend, the vfs-settings don't do anything as all that's handled by the cache backend.
You can do 10 transactions per second on Google Drive via their API so making those numbers huge, makes it perform horrific.
You should turn those numbers way down and test again.
Also, you want to create your own key:
It's definitely helpful to get your own API client_ID! I just recently discovered the benefits of this.
Go to the bottom of https://rclone.org/drive/ and follow the instructions.
Then you'll have to edit the rclone config for your drive to include the credentials.
AFAIK, you need to do more configuration for cache use. See here, https://rclone.org/cache/
I use mounting all the time; whatever I do while mounted is executed within the Google Drive. I haven't tried the cache configuration yet. It is still in beta and the forum discussions are a bit above me. I am definitely going to test it once they progress beyond beta. I use mounting mostly to diff between my local and gdrives before sync/upload.
My upload speed is horrendous. So, I haven't tried uploading or downloading while mounted. Using Rclone Browser with an optimized transfer configuration suits me better.
You can mount rclone (encrypted and unencrypted) remotes as disk drives to your system. I believe Mac and Linux have had this all along, but now you can mount in Windows too. See here, https://rclone.org/commands/rclone_mount/
After mounting the encrypted remote, it will show up in your operating system as any other disk and all file/folder names shown. Then, you can move whatever you like to wherever you like.
Do you mean client ID? You can skip that if you don't have one. But, it is very easy to get one. Just follow the instructions at the bottom of this page: https://rclone.org/drive/#making-your-own-client-id
Ok, I've found this app, which is basically an app for Rclone on Android with a GUI!
It works very well for me!
https://play.google.com/store/apps/details?id=io.github.x0b.rcx&ah=towlii3syYpu4J0-aIXFn-Iul3M
I don't have the time to start tracing down how this guy is performing a reverse proxy and if there's any shady shit going on, but I don't think that I would do this method AT ALL.
Do yourself a favor and just get a seedbox — Seedboxes.cc apparently has Google Drive integration, but I'd still encrypt it if I were to go that route (I'm not familiar with any options that they may support). Even this guy seems to hint in the video that Google might ban you for violating terms of service (which is clearly happening here).