Install WinFsp (http://www.secfs.net/winfsp/); you need it for mounting.
Open a command prompt and go to your rclone directory (e.g. C:\rclone). From there you can use rclone commands to mount, back up, or whatever. Here's my current mount, as I think it works faster than my cache-mount test:
>rclone mount crypt: X: --allow-other --dir-cache-time 48h --vfs-read-chunk-size 10M --vfs-read-chunk-size-limit 512M --buffer-size 512M --stats 2s -vv
crypt: is my encrypted remote for Google Drive.
X: is the drive letter for the mount.
If you want to try it with your cache remote, it's the same pattern: rclone mount <name-of-your-cache-remote>: X: --options --options1 etc.
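For example, assuming your cache remote is named gcache: (the name is hypothetical, substitute your own remote and drive letter), the snippet below builds the command into a variable and prints it so you can inspect it before running:

```shell
# Hypothetical cache remote "gcache:" mounted to drive X: on Windows.
# The command is echoed rather than executed so this is safe to dry-run.
CMD='rclone mount gcache: X: --allow-other --dir-cache-time 48h --buffer-size 512M --stats 2s -vv'
echo "$CMD"
```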
Unfortunately, 1.37 copies files it really shouldn't, since they're already in the destination. 1.36 works well. The documentation doesn't help either. Rolling back to 1.36 for the time being.
Hetzner Cloud is their VPS line. I tried the CX11 and was only billed 50 cents after using it for half a week. Most people recommend the CX21 for Plex, as you'll get double the CPU, RAM, and storage, especially if it'll be your main server.
With Hetzner Cloud you can also make snapshots (easy-to-restore backups) of your servers. So if you went with a lower-end option initially, you might be able to make a snapshot and then redeploy it on a higher-end option. I can't say for sure as I've never used the snapshot option, but that's how I understood it.
I personally haven't seen anything below the ~30 USD price point, and haven't seen VPS or virtual plans that offer enough local storage for the whole usenet suite. I had good luck for a while on a Xeon 5520 with 16GB of RAM and 4x250GB drives from Wholesale Internet. It locked up periodically though; old hardware, I guess. I briefly tried Scaleway (neat concept, horribly oversold, so you're liable to lose your server when you need to reconfigure something), and just switched to a Hetzner auction server with a pair of 2TB disks in RAID0, an i7, and 32GB of RAM.
I was nervous moving overseas (I'm in the US, Hetzner is German) but I had no buffering issues with either Scaleway in Paris or Hetzner in Germany. Both have actually performed better than Wholesale, who was on the same continent as me.
Well, currently I'm looking into odrive. Supposedly good speeds, but their Linux documentation is extremely poor. Having trouble figuring out how to move my data to my server so it can upload to a Google account.. :/
Hmm, you're right. The plexdrive README is inaccurate then.
runtime.NumCPU() returns the number of logical processors, so that should include hyperthreaded cores, in which case on my 4-core hyperthreaded CPU the defaults should be:
--chunk-load-threads=4 (runtime.NumCPU()/2)
--chunk-load-ahead=7 (runtime.NumCPU()-1)
--chunk-check-threads=4 (runtime.NumCPU()/2)
--max-chunks=16 (runtime.NumCPU()*2)
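The list above can be sanity-checked from the logical-CPU count; a quick sketch (CPUS is hard-coded to 8 here to mirror a 4-core hyperthreaded chip, swap in $(nproc) on a live box):

```shell
# Reproduce plexdrive's documented defaults from the logical processor count.
CPUS=8  # 4 physical cores with hyperthreading; use $(nproc) for your machine
echo "--chunk-load-threads=$((CPUS / 2))"   # 4
echo "--chunk-load-ahead=$((CPUS - 1))"     # 7
echo "--chunk-check-threads=$((CPUS / 2))"  # 4
echo "--max-chunks=$((CPUS * 2))"           # 16
```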
Then it doesn't make sense to me that my new settings would work better than these defaults. I'm going to have to do some more testing. Perhaps it's solely the increased max-chunks that is causing it to perform better.
So I was looking at the i7-6700 (MC-32) or E3-1270v6 (SP-32) servers from OVH. Is that overkill, or about what I'd want for decent streaming with transcoding?
https://www.ovh.com/us/dedicated-servers/game/171mc1.xml
or
https://www.ovh.com/us/dedicated-servers/enterprise/173sp2.xml
I wouldn't move them to the mount; personally I find writing to an rclone mount to be not super reliable.
I would create a simple post-processing script for SABnzbd that uploads the completed download with an rclone copy command.
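A minimal sketch of such a script (the crypt: remote and media/ target folder are placeholder names, and SABnzbd passes the completed job's directory as the first argument). It echoes the rclone command instead of running it, so you can dry-run it safely:

```shell
#!/bin/sh
# SABnzbd post-processing sketch: $1 is the completed job's directory.
# "crypt:" and "media/" are assumptions -- substitute your own remote and path.
DIR="${1:-/downloads/complete/Example.Show}"  # sample fallback for illustration
DEST="crypt:media/$(basename "$DIR")"
echo rclone copy "$DIR" "$DEST" --transfers 4 -v
# Real script: drop the echo so the copy actually runs.
```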
Just wanted to throw my hat in the ring -
I use Google Drive for both Plex as well as a backend for my CDN Origin (using a container to fusemount Drive to a folder serving via Caddy).
I run a round robin of a minimum of three Google Drive accounts and use a service called CloudHQ (https://www.cloudhq.net). It's $58/year, but it is immensely helpful in not having to keep up with rclone or risk issues with it. They keep my three accounts in uniformity. I recently lost a Google Drive account but had everything in the other accounts and was able to recover easily without issue.
If you want to keep your Google Cloud compute credit, or have already used it, I had a decent experience with https://www.multcloud.com when the rclone keys were revoked from ACD.
The transfer does take a lot longer, but it was admittedly a cheap and easy alternative.
Haven't tried it personally, but they do say it's available on Windows https://rclone.org/commands/rclone_mount/#installing-on-windows
Although there are performance differences between FUSE and WinFsp.
Yes. Here is the doc page. https://rclone.org/cache/
You point everything at the rclone mount point. The two flags you're looking for are --cache-tmp-upload-path, which you point at your SSD so files are initially stored there, and --cache-tmp-wait-time, which is how long files stay in that directory before being uploaded to gdrive. You want that to be long enough for files to complete before being moved.
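Put together, a mount using those two flags might look like this (the remote name and paths are assumptions; the snippet builds the command into a variable and prints it so you can check it before running):

```shell
# Sketch: cache remote "cache:" mounted at /mnt/media, with new files staged
# on an SSD for 1 hour before rclone uploads them to the backing remote.
UPLOAD_PATH=/mnt/ssd/rclone-staging
CMD="rclone mount cache: /mnt/media --cache-tmp-upload-path $UPLOAD_PATH --cache-tmp-wait-time 1h --allow-other"
echo "$CMD"
```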
You can use rclone union to pool the drives without formatting. The only issue is you can only write to one drive, although you can change the drive at any time in the configuration.
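A sketch of what that could look like in rclone.conf (the disk paths are made up; note that older rclone versions used a `remotes =` key and wrote only to the last remote listed, while current versions use `upstreams` with selectable write policies):

```ini
[pool]
type = union
upstreams = /mnt/disk1 /mnt/disk2
```

You'd then mount or copy to pool: like any other remote.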
Then you have a few options.
I suggest option 2 for manageability. With 2 you only have one program to deal with (rclone): you don't need unionfs, and you can also drop your script that copies files to gdrive (rclone will take care of that). To me it's a simpler setup.
So I'm going to point you at a place to start reading. Feel free to come back and ask questions and I'll help. Start reading here: https://rclone.org/cache/
I run this on my 160TB library with great results.
You can use rclone moveto. For automation, since you've created a unique problem relevant only to yourself, you may have to write some of your own scripts.
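As a starting point for such a script, a hedged sketch: it stages a sample file in a temp directory and prints the rclone moveto command it would run (the crypt:media/ destination is an assumption):

```shell
#!/bin/sh
# Sketch: loop over staged files and emit one "rclone moveto" per file.
# moveto moves AND renames in a single step, which makes it script-friendly.
STAGE=$(mktemp -d)
touch "$STAGE/show.s01e01.mkv"   # stand-in for a real staged file
for f in "$STAGE"/*.mkv; do
  DEST="crypt:media/$(basename "$f")"
  echo rclone moveto "$f" "$DEST"
done
rm -rf "$STAGE"
```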
Dammit, it seems like Google Gsuite/Workspace isn't feasible anymore. No more unlimited storage, except for enterprise customers.
I wonder what happens to all the people who have 10+TB in their Gsuite Drive.