You need to configure Cloudflare and B2 to work together. There are plenty of guides on both Cloudflare's and Backblaze's sites, but most of them focus on making the bucket public from Cloudflare's side. For a private bucket, you need to create a Worker that forwards the auth header to B2.
Worker:
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

const B2_URL = "f000.backblazeb2.com";

/**
 * Respond to the request
 * @param {Request} request
 */
async function handleRequest(request) {
  let b2Url = new URL(request.url);
  b2Url.host = B2_URL;
  let b2Request = new Request(b2Url, {
    method: request.method,
    headers: request.headers
  });
  return fetch(b2Request);
}
Then it's just a matter of configuring the download url in RClone to point at your Cloudflare endpoint.
--b2-download-url
- doc: described on rclone's B2 backend page
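As a rough sketch (the bucket, paths, and the files.example.com hostname are placeholders for your own bucket and whatever domain your Cloudflare Worker answers on), downloads would then look like:
rclone copy b2:my-bucket/backups /local/restore --b2-download-url https://files.example.com
You can also set download_url in the B2 remote's config so you don't have to pass the flag every time.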
It's actually super easy if you already know how to use rClone. Go to cloud.google.com and enter your billing details (you'll never get charged). Go to the console and create a "Project." Once you create your project, go to "Compute Engine" and "Create Instance." Under create instance, all you need to do is select "Micro" under the machine type and finish creating your instance. After that just run the rClone install script "curl https://rclone.org/install.sh | sudo bash" and you'll be set... From there you just need to run the rClone config commands to get each of your drives set up.
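For example (the remote names here are just placeholders for whatever you call your drives during config), the whole workflow on the instance boils down to:
rclone config                                 # set up each drive as a remote, one at a time
rclone copy old-drive: new-drive: --progress  # the VM's bandwidth carries the transfer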
Shoutout to rclone and its crypto feature! This makes my backups as easy as rclone sync -P ~/SomeFolder encrypted-cloud:SomeFolder
and you can even mount your cloud provider as a network drive.
Choose one as primary. Mirror to the other one using rclone - https://rclone.org
$80 USD a month. You will need a domain. Check Namecheap or another registrar for the cheapest TLD, and look at the yearly renewal price too. It should be less than $20/year.
$80 USD/month + $20/year for domain.
> I've determined the thing holding me back is how dependent my school life is on the onedrive desktop client.
You do know there are OneDrive Clients for Linux / ways to access OneDrive from Linux:
- Via the OneDrive for Linux client - https://github.com/abraunegg/onedrive - this 'syncs' your data, bi-directional operation, open source and free
- Via the 'onedriver' client - https://github.com/jstaf/onedriver - Native file system that only provides 'on-demand' functionality, open source and free
- Via 'rclone' - https://rclone.org/ - one way sync client, open source and free
- Via 'insync', 'ExpanDrive' - non-free client
- Via the web browser of your choice
The Google Drive API is great if you are looking to incorporate it into a larger application, or if you are looking to learn about Google APIs (which are relatively complex).
If you just want to sync files up to your GoogleDrive, consider https://rclone.org/
Here's the crontab entry which syncs my audio files up to the cloud. No fuss, no muss. Running rclone config the first time guides you through setting up the remote.
/usr/local/bin/rclone copy ~/Music GoogleDrive:Music
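For reference, a complete crontab line would look something like this (the 2 a.m. daily schedule is just an illustration):
# m h dom mon dow  command
0 2 * * * /usr/local/bin/rclone copy ~/Music GoogleDrive:Music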
Check out https://rclone.org/ - search the subreddit here for more info, and the documentation on the site is pretty straightforward (my only tip is to set up Gsuite in rclone with your own API key instead of the default to avoid slowdowns due to rate-limiting).
>They still get your metadata: The time/date you access your data from, the location you're accessing it from, when and how often you upload new data, the file names, the size of the files, etc.
Better tools like rclone (basically encrypted rsync for cloud storage services, MIT license) or encfs will obfuscate filesize and filename metadata. Of course that still leaves other metadata available to the storage service, like upload frequency and times. But at least it's better than an Owncloud/Nextcloud method of remote encrypted storage that only encrypts file contents, and leaves filenames wide open to the storage provider.
https://rclone.org/amazonclouddrive/
When you do rclone config and pick Amazon Cloud Drive, it first asks you for your client_id and your client_secret, which you enter from what you find in your Amazon dev security profile.
Also make sure you have the redirect URI in your Amazon security profile set to http://127.0.0.1:53682/
Yes, it's a major problem. A couple of things worth noting:
I think you could lose your gsuite account over copyrighted material. I haven't used it personally but people over at r/plex upload terabytes of pirated media on their gdrive without issues using rclone which has the capability to encrypt your uploads on the cloud.
rclone along with rclone browser has been an absolute pleasure to use when managing my personal Google drive, mega, school google drive and local storage together. They also have OneDrive integration. The best thing about it is that maybe one day you want to move from one provider to the next. With rclone you can move your files from one provider to the next effortlessly.
I highly recommend that you give it a try. It may seem a bit unintuitive if you're not familiar with rsync conventions but it's definitely worth learning.
I've never tried the amazon drive desktop app, but I highly recommend rclone! It requires using the command line, but once you get the hang of a few important commands it's like magic. There is also a gui available called RcloneBrowser.
Yup, link is accurate. (rclone has its own webgui, which MIGHT make it easier, emphasis on might!) https://rclone.org/gui/
Once you have that set up, sync it with the help of these scripts: https://github.com/BinsonBuzz/unraid_rclone_mount
Went down this road myself and gave up, the mergerfs app would cause the array to fail to shutdown and every reboot required a parity check which drove me nuts after the 3rd time.
it seems that my not booting any docker containers caused the script to 'hiccup', but sometimes you just don't want to 'docker' like everyone else.
It may help, it may not, but yes, a lot of it is Linux command line. This handy guide might help specifically: https://blog.kennydeckers.com/posts/mount-google-drive-folder-on-local-server-using-rclone-docker-container/
again, I gave up. CBF with all this. so I am probably not helping, but there is a wealth of info out there, just painful to navigate it. (meanwhile the community is begging SpaceInvader One to update his video about mounting Google Drive on unraid for v6, i've seen a lot of posts about it)
I don't know if it's the best way, but you can create a MinIO cluster on-prem and then sync data from S3 to MinIO using `mc mirror` or rclone.
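A minimal rclone sketch, assuming remotes named s3: and minio: have already been configured for the two endpoints:
rclone sync s3:source-bucket minio:source-bucket --progress  # mirrors the bucket contents over to MinIO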
Do you need to backup ZFS datasets including metadata, etc or just the filesystem data?
For the former I think the best you're going to do is a zfs send to a gzip archive then copy those over, but I don't think you can really do an incremental.
For just the filesystem data, I think good old fashioned rsync or rclone (seriously check out https://rclone.org/ its awesome) will suit you best.
Backblaze offers a cheaper but more complicated cloud storage system called B2, if you're willing to learn how to use it it's much more flexible. https://www.backblaze.com/b2/cloud-storage.html
I use rclone with it.
To copy the compressed file to your GDrive instantly, add it to your drive before using the latest version of rclone to copy it. It’s a new feature called server-side move/copy.
https://rclone.org/crypt/#name-encryption
> They are then encrypted with EME using AES with 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper “A Parallelizable Enciphering Mode” by Halevi and Rogaway.
To my knowledge there has been no formal review and verification of the implementation, and cryptography is something that is notoriously easy to mess up, so take it for what it's worth.
I would highly recommend rclone to upload your files. It's a lot like rsync except it supports cloud-based storage backends instead of local/SSH-accessible storage. However, it does not support Glacier, so you have a couple of options:
This is what I do. My plex box is Ubuntu 16.04 with 3 4TB WD Red HDDs in a RAID5 array. All my media is stored on there.
I then use rclone to back up my media to Google drive. I actually back it up "encrypted", although I'm not sure how much difference that makes. It does make me feel better though. The encryption piece is done on the fly, so everything is only encrypted when it gets uploaded. I run a cronjob nightly that looks for files that haven't been synced and adds them to my GDrive.
So far, it's working flawlessly. The one saving grace for me has been my fiber internet connection. I have 1Gbps up and down, so it makes uploading the data so much faster.
How about using rclone for that? You are already in r/rclone so just use it like this:
For a general tutorial you can find a ton on the Internet and the docs on https://rclone.org are also just fine.
Or as user friendly alternative "duplicati"
The number changes based on the backlog as rclone does not get all the information up front:
--max-backlog=N
This is the maximum allowable backlog of files in a sync/copy/move queued for being checked or transferred.
This can be set arbitrarily large. It will only use memory when the queue is in use. Note that it will use in the order of N kB of memory when the backlog is in use.
Setting this large allows rclone to calculate how many files are pending more accurately and give a more accurate estimated finish time.
Setting this small will make rclone more synchronous to the listings of the remote which may be desirable.
https://rclone.org/docs/#max-backlog-n
So it's working exactly as expected.
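For example, letting rclone queue far more files up front (at the cost of roughly N kB of memory) gives more accurate totals and ETAs; the remote name and value here are placeholders:
rclone sync /data remote:data --max-backlog 200000 --progress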
No doubt someone has written a consolidated guide, but you should be able to get it working from the two separate guides on rclone.org:
Then add an rclone copy job to each of your remotes. (Copies won't sync deletes.)
I imagine that you can access it, right?
If you can, add it to your drive (right click on folder -> add to my drive), after that, you can download it directly for your PC.
I recommend rclone, check their website https://rclone.org
Not so noob speaking here. For my data I currently use a strategy following the 3-2-1 rule using `restic` and `rclone`.
First, I use restic to back up my data to an external hard disk regularly. It is a backup program with snapshots (to roll back if anything goes wrong) and deduplication (to save space).
Then, I use rclone, which can be considered the rsync for cloud storage, to sync my whole restic backup repository to OneDrive for Business (my uni gives 1TB storage for life). As the restic repository itself is encrypted, the data on OneDrive is encrypted as well.
I don't back up the whole home directory. Instead, I cherry-pick real data (i.e. documents, photos, etc.) to back up. I have written a script to help me do the whole process.
And for the system backup, I use timeshift. System backups are important as I'm using a rolling distro (Manjaro). I have also written a script to rotate the snapshots before doing each system update.
Feel free to ask me anything. I can also share the scripts if you are interested.
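Not the actual script, but a minimal sketch of the idea, assuming a restic repository on the external disk at /mnt/external/restic-repo and an rclone remote named onedrive::
#!/bin/sh
# assumes RESTIC_PASSWORD is set in the environment (otherwise restic will prompt)
restic -r /mnt/external/restic-repo backup ~/Documents ~/Photos   # snapshot selected data locally
rclone sync /mnt/external/restic-repo onedrive:restic-repo -P     # mirror the already-encrypted repo to OneDrive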
Yes, the server needs to be on, always. And JF only collects and catalogs the content metadata in its DB, so yes, the media needs to be retained for you to play it on JF. An alternative would be to use a cloud drive to store the data and mount that as a local drive (using rclone or similar tools) on your server, so that you don't have to worry about content eating up space.
Rclone will do what you want.. but it's not like it sends direct from one cloud to another.. it will download and reupload them so you'll be using your connection to do it.
The way i've done it is to sync my local copy to both google and amazon at the same time.
there are other services out there like multcloud that will handle it for you.. Or you could buy a cheap VPS and just set it up to run for a couple of days using rclone.
However.. there are limitations in rclone's implementation of Google Photos due to the GP API that's available :
From : https://rclone.org/googlephotos/
"When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.
The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort"
So with that in mind, i would upload them from your original local copy (assuming you have one) or use Google Takeout to get all your files out of Google and reupload them however you like.
https://rclone.org/commands/rclone_dedupe/
> In the first pass it will merge directories with the same name. It will do this iteratively until all the identically named directories have been merged.
Well, even more reason to use rclone then, isn't it? I mean, even if, let's say, you somehow could use GDFS (File Stream) to read the encrypted remotes (you can't, but let's say you could), you still wouldn't want to do it, especially in a hurry:
curl https://rclone.org/install.sh | sudo bash
That is, if you don't already have it in your distribution's repositories (it is in Ubuntu, Termux, etc.).
rclone is great. At first it is daunting without a GUI, but you'll quickly overcome that, and the documentation provided at rclone.org is great.
You can use rclone to mount your cloud storage. It will appear as a normal folder on your computer which you can see decrypted, you can move files in and out and they will get uploaded or downloaded from the cloud.
For big transfers or to sync folders it is best to use the command line.
I would suggest using NSSM to create a service to automatically load the rclone mount when you boot windows. You could then set up an auto sync or auto back up task.
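A rough sketch of the NSSM part (the paths, remote name and drive letter are assumptions, and rclone mount on Windows needs WinFsp installed):
nssm install rclone-mount "C:\rclone\rclone.exe" mount gdrive: X: --vfs-cache-mode full
nssm start rclone-mount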
I am using nextcloud which iirc is a forked version of owncloud. Their WebDAV address scheme seems to be the same ({installation_url}/remote.php/webdav) and rclone officially supports owncloud (via WebDAV see: https://rclone.org/webdav/#owncloud ).
I started doing a detailed write-up trying to answer some of your questions.
But really, let's take a step back: AWS doesn't support SFTP as a service, you need to spin up an EC2 box to run it. Sure, that doesn't cost that much, but it is complex. Running a server means you also have to respond to security updates, rotate the logs, monitor it for problems, etc. It would be *much* better if you can architect a simpler solution.
So, why don't you post the actual problem you are trying to solve? Is there a reason that SFTP *must* be involved? There are dozens and dozens of programs that can upload directly to S3. (CyberDuck, Jungle Disk, CloudBerry, Mozilla S3 extension, etc).
Even if you must use SFTP, you could subscribe to a generic SFTP service, then use rclone to move the files to S3. This will be far more secure than trying to run your own server without a competent system administrator.
I use rclone crypt. Free, open source, strong clientside encryption, frequently updated. Though it requires using the command line and you need to manually tell it when to sync.
>I'm not really a big fan of rclone and other tools to automate my backup. So I manually backup my files.
Why? rclone has encryption capabilities, and is compatible with backblaze
If it's not important to you, that the files are directly available in google drive, you can use rclone cache with "offline uploading": https://rclone.org/cache/#offline-uploading
That way files written into the mount directory will be written to a local folder at first, and then after some time uploaded to google drive.
It will be much more reliable. Rclone mount without cache isn't reliable at all, for writing.
Uh, what you are doing makes no sense to me.
Google Drive can return checksums for files you have stored on it. Rclone supports reading these checksums via the Google Drive API.
rclone reads my local files and computes a checksum, then compares this checksum to the one returned for the file from Google Drive. What you suggested with mounting would require you to download all of the data from your Google Drive every time you compare your files, because you are computing the checksum locally in Beyond Compare.
I'm simply using the rclone cryptcheck command which is specifically built to check the checksums of the data you are storing on your encrypted cloud drive.
https://rclone.org/commands/rclone_cryptcheck/
I'm not sure what you mean by earlier checksum info to crosscheck. I am comparing a fresh checksum calculated from the files on my zpool (which I know are not corrupted due to ZFS end-to-end check-summing) with the checksums for my files returned by Google Drive API.
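The check itself is a single command (the pool path and crypt remote name below are placeholders):
rclone cryptcheck /tank/media gcrypt:media   # compares local checksums against the encrypted remote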
>Will that server be suitable for hosting a store with a small amount of traffic as well as RuTorrent with several torrents downloading at the same time for most of the day?
Yes. You can host anything on that server unless it violates the ToS.
>So are those the only operating systems you can have?
Nope. They have more operating systems than that available in the control panel.
> Will that allow me to unzip stuff easily like in cPanel?
Yes. Login to SSH and do whatever you want.
>Does SoYouStart allow that as well?
Yes. But you have to do that by yourself.
>How do I go about installing Rclone and RuTorrent on the server?
Rclone: Login to SSH and do the following step in this link https://rclone.org/install/
Rutorrent: Check the right side in this sub.
If you can't do it by yourself, contact Anyd10Gbit or Quickbox. They will do it for you for a fee.
Sure, just follow the instructions at the bottom of this page (https://rclone.org/drive/). Then you can enter those when you are using rclone-config as your client ID and client secret.
Also if you are mounting it as a drive I have some command line options set which seem to help as well.
> This can only write files sequentially, it can only seek when reading. This means that many applications won’t work with their files on an rclone mount.
https://rclone.org/commands/rclone_mount/
Anyway, it's still possible to make it work. You just have to download to a separate directory and then copy it using rclone, or copy it to the rclone mount afterwards.
Also from said page:
> rclone mount vs rclone sync/copy
> File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can’t use retries in the same way without making local copies of the uploads. This might happen in the future, but for the moment rclone mount won’t do that, so will be less reliable than the rclone command.
I back up selected data (mostly DB dumps, dokuwiki and /upload) from my VPS with rclone, through cronjobs. I use Google Drive (OK, not fully selfhosted) as the target, but rclone can use other targets.
I highly recommend Rclone. It allows you to mount your gdrives and much more (S3, blob storage, Google Cloud Storage, and so on). You can do a rclone config, add the details for your drive, then a rclone mount <your_gdrive_config> <mountpoint>, and you can browse your gdrive as if it were a folder.
Of course you can do that.
Since I had to go through this process frequently, I set up a free workaround. I cache the file into my RD account. Generate a download link. Use the download link with the command line program called RClone to directly upload the file into my Google Drive.
Note: This free workaround uses your bandwidth. Means, file is being downloaded into your computer from RD and SIMULTANEOUSLY uploaded from your computer to Google Drive or whatever free host you set up.
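One way to do this in a single step is rclone's copyurl command; a sketch with a made-up link and remote name:
rclone copyurl "https://real-debrid.example/d/abc123" gdrive:Downloads/ --auto-filename -P
The file still flows through your machine (downloaded from RD and uploaded to the remote), exactly as described above.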
Use rsync or rclone to sync.
With rclone you can use your encryption on cloud providers https://rclone.org/crypt/
For rclone GUI try RcloneBrowser
Indeed. OP can also use rclone (docker hub image) to sync data to and from various cloud storages (Amazon S3, Google Cloud Storage, Google Drive, Google Photos, Backblaze, etc.), self-hosted Nextcloud, MinIO, WebDAV, SFTP, HTTP, Ceph... as an initContainer, as you mentioned, to populate the PV, and as a cronJob to regularly synchronize data back to storage from the PV.
We use it at work in a cronJob to sync data pushed by a contractor on a S3 storage to our "data to process" PV, and it's been reliable since.
rclone can be used to mount a gdrive as a local folder, then you can use whatever copy tools you like to fetch your files. It's in the standard repos but there might be a newer/faster version direct from the developer.
rclone can be used to mount your OneDrive folder in a way Plex can work with, though you still need your computer to be running in order to access your server that's running on said computer.
Rclone supports encryption; most of us on the plan use rclone with encrypted drives, which solves all issues in that regard. It is end-to-end encrypted, so neither I nor Dropbox will be able to access your data, even with a warrant.
I have recently gone through a similar journey, but with fewer notes. The hurdle you are going to have to get over is the fact that Joplin doesn't have its own cloud out of the box (Evernote does); hence the difficulty of RocketBook providing an out-of-the-box path to Joplin. Here's how I do it:
1-Scan to RocketBook mobile app, then make my destination be Google Drive (obviously there are other options, but I'm already a Drive user).
2-I sync my Google Drive Rocketbook folder with a local folder using RClone and a cron job. You could use a different Cloud (OneNote), and in my case the destination is just temporary, as once RClone runs it cleans out the destination. My notes don't live in the Cloud.
3-I use the Joplin Hot Folder plugin to watch the synced folder. New notes that I brought down from Drive come in as PDFs and these are added as notes into Joplin.
PM me if you want additional information. My method is really not that hard and is automated so I haven't had to touch it since I set it up.
You can do this right inside rclone. Check out the Union remote. Its level of control is inspired by MergerFS but offers slightly different options.
You can tell it priorities, etc. You can even do things like make things write to local and then separately sync to the remote (like the VFS cache does but more explicit). There are all kinds of fun stuff you can do
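A minimal union remote sketch for the config file (the local path and gdrive: remote are assumptions); which upstream handles reads and new writes is controlled by the search/create/action policies described in the union docs:
[union]
type = union
upstreams = /mnt/local gdrive:media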
You could also check out Idrive Cloud, which claims to be a cheaper than B2 and Wasabi. They have free egress and don't charge for transactions. Haven't used this service, but the company's other storage services seem popular among ex-Crashplan users.
Backblaze B2 is a little cheaper than Wasabi if you don't plan to download files. I've been using it for a while and have no complaints. FYI there was a Facebook tracking issue not too long ago though, they've sent out an email recently about it. If you'll be using B2, I'd suggest rclone with the `--fast-list` option to reduce possible transaction charges. All charges are upfront and predictable from what I've seen.
If you'll be downloading frequently, then Wasabi is likely a better option since they have free egress, and don't charge for transactions. Haven't used it but have heard good things about it, it seems very popular as a storage backend for Nextcloud. They have a minimum charge though, and retain files for 90 days after deletion (which you still pay for).
If you have a handful of admin accounts in the different locations that have admin access to all of the user dirs, then mount it as a file system with rclone and copy it over.
Not sure about the permissions being copied here
Check out Rclone. I would recommend against backing up veracrypt containers since differential backups/syncs are not possible. (i.e. if a single file changes you need to upload 200 Mb). I had my doubts regarding rclone and not understanding it clearly, but this video cleared all my doubts. I hope this helps. This way you will have a single command to run which you can run automatically using cron jobs. Plus it is available on all platforms. Recommending this over cryptomator since you want it for backup purposes, so accessing it from your android should be possible using RCX but not as smooth as cryptomator.
You can use filtering to include or exclude folders. So you would set the source to be the parent to all the folders you want to sync and then use filtering so rclone only acts on the correct folders.
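For example (folder names are placeholders), run from a parent that contains everything:
rclone sync /data remote:backup --include "/Photos/**" --include "/Documents/**" --dry-run
Drop --dry-run once the file list looks right.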
Google drive has a 750 GB daily upload limit. I think that is what you're running into. Just wait a day and you'll be able to upload again.
Take a look at the rclone tool - it can do parallel uploads to S3 (and handles retries etc.). I'd be tempted to just use that rather than try to reimplement it in Ruby - it's very reliable: https://rclone.org/
Are you looking for cloud storage where all devices access the same file over the internet? Or do you want file sync software that keeps copies on each device? What are you trying to accomplish? What type of files? How much storage/bandwidth will you be using? Maybe rclone.
Rclone does sftp and can do rclone mount (needs also some other free/open source program but it works well). Other than that a VPN would work too and be usable for more things.
I have not used the mount feature on Windows, but it says it works, and it is compatible with Mega.
I'm assuming you're paying for the 3 user business account? Just curious as I'm looking for a backup plan.
FWIW, I eventually had to move from Hyper Backup to rclone when backing up to Google Drive (G Suite).
I don't know why, but after I passed maybe 15TB or so, backups started to fail with some level of frequency. After I broke 20TB, backups stopped working altogether to Google. Switched to rclone and never had any issues again.
You can use a program like rClone, which allows you to sync and copy between your local computer and a ‘remote’ (Google Drive or other provider). When you set everything up, you can choose to encrypt and obfuscate the content. So you will look on Google and see random file names and encrypted content (preventing Google or anyone else from knowing what you have). Then, you can use rClone to browse and copy files back to your computer when you need something.
Maybe rsync or rclone might be an answer. It will fall behind in the first copy or two, but assuming the link is faster than the amount of data generated, it will eventually catch up.
Well, rclone doesn't support rapidgator.net or alldebrid natively. The list of supported cloud endpoints it can copy to/from is here. You said "Meg", but I'm assuming you meant Mega, which is a supported endpoint, but in this case it wouldn't natively support copying between accounts this way. The only reason it works between Google Drive accounts with shared links is because links that have been added to an account act like your own folders, which you can then copy using things like GDrive API calls. In fact, if someone wants to go and manually copy files between "shared with me" or folders you've added using "Add to My Drive", basically, here's the Google Script to do it. The nice thing is that rclone keeps track of things and won't re-copy.
Free and cross platform :
Use rclone. There is plenty of documentation on their site. You can use it to do everything you want. Create a scheduled task to run `rclone sync` to back up your data to an encrypted volume created in rclone. `rclone mount` to make your GDrive accessible like a regular drive. etc.
Costs money but is fairly hands off once configured (Windows only):
I really like the Stablebit collection of software. Stablebit CloudDrive can provide you with an encrypted mount to GDRIVE. And Stablebit Drivepool can help you automatically mirror a local drive or set of drives into CloudDrive (encrypted) in near-realtime and automatically (no scheduled tasks). Grab Scanner too and it'll help monitor your drives and let you know if something is about to fail or has surface corruption, so you'll have far fewer instances where stuff just FAILS all of a sudden; basically you'll be able to see it coming.
There are a few blog/forum posts on the topic. The additional problem in your case, though, is definitely the low upload bandwidth. For most people that's probably not really the use case.
According to this post it does work, though, if you use rclone. With it you can mirror the files to another instance; since the tool does this via WebDAV, I'm not sure whether that fits your case. But it definitely should not lead to a database/file desync. You'll probably have to set up users, plugins, etc. on both machines and then let rclone run. Feel free to report back whether it helped.
I had done something similar when Amazon Drive had unlimited data. Personally I used rclone. I had uploaded ~4TB at the time with encryption and mounted it locally (which presented Amazon Drive as a local drive with the data unencrypted).
The downside is that rclone is CLI (Command Line Interface) only.
I would also recommend that the upload be done in screen or tmux (I prefer the latter) so that it runs in the background. Once all your data is uploaded (dependent on your upload speed) you can then make a script or a cronjob to run every X days at Y time. (I would recommend once a week, at a time when you will definitely be sleeping.)
rclone to the rescue
Just configure your Nextcloud and AWS accounts as rclone backends, then use rclone sync (or copy, as sync removes files on the target that are no longer present on the source) to mirror the entire directory tree.
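Assuming remotes named nextcloud: (WebDAV) and s3: have already been created with rclone config, the mirror is a one-liner:
rclone sync nextcloud: s3:my-backup-bucket/nextcloud --progress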
> My problem is that S3 cannot be managed like a real file system
This is correct, but not in a way that matters. Slashes in s3 paths are treated as "virtual" folders, and most tools that operate against s3 are smart enough to respect them, including the AWS CLI and rclone.
Not something I can answer, but you have a decent variety of flags that can tweak the command to your preferences
https://rclone.org/commands/rclone_mount/
You could also chuck a remote behind an rclone cache, then mount the cache instead.
No problem! It's really easy to automate this stuff using rclone and PowerShell or whatever you like scripting in. rclone can list files using json making it really easy to parse.
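For instance, rclone's lsjson command prints the listing as JSON that PowerShell (or any scripting language) can parse; the remote name here is a placeholder:
rclone lsjson remote:backups --recursive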
Why mount it at all? Rsync can work more easily over SSH without mounting it first. Or do you have good reasons to really mount it? If it's not local, but cloud you want to sync to, you might be interested in rclone: https://rclone.org
rclone can do that. You want to run two commands - first copy with --max-age 1M to copy the newest media, and then delete with --min-age 1M to specify how old the files to delete must be.
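Put together, it looks roughly like this (paths, the remote name, and which side you delete from depend on your setup; here old local files are removed after upload):
rclone copy /local/media remote:media --max-age 1M   # copy only files modified in the last month
rclone delete /local/media --min-age 1M              # then remove local files older than a month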
Rclone is quite straightforward. I'm Rclone crypting some files to my unlimited Gdrive account right now. The documentation on the rclone website is pretty clear
IT IS! I just finished something similar.
I have them upload the csv to google drive (in this case they export the csv from another service, but editing a google sheets doc is just the same). Then I have a cron job on a utility server that uses rclone to copy the file locally in CSV format, run the CSV through a python script that converts it to the JSON format that I need, then commits any changes to the file to the website's git repo. The website is static and currently built with jekyll and the commit causes the CD server to rebuild the site and deploy the latest version of the site to production.
You could do something similar and change the logic in the python script. Google Drive also supports webhooks so you could probably drop the cronjob, but I have yet to play around with that.
I also use google drive to hold files that people need to update and a similar cron job pulls them down and rsyncs to a different location for "public" consumption. So, you could create a directory for images/assets and incorporate that into your flow as well.
You're making more work for yourself.
Just do:
rclone move ~/private/rtorrent/data/Movies gdrive:/Movies/ -v --transfers=30 --stats=5s --drive-use-trash
That way you don't need to move folders on gdrive, they are placed exactly where they should be. You should almost never use the sync command - instead use copy or move.
Also, quotations " " are only needed if you have spaces in your paths. It does no harm to use them all the time, though, if you prefer it that way.
https://rclone.org/docs/ Read up on some nice features!
As for your seedbox requirements, i'm not sure how difficult getting ratio on HDBits is so can't really recommend a dedicated provider for that specifically!
Unfortunately, 1.37 copies files it really shouldn’t since they’re already in the destination. 1.36 works well. Documentation doesn’t help either. Rolling back to 1.36 for the time being.
I don't bother because I only store TV/movies/porn there, but there's nothing stopping you encrypting before upload (I'd create digests of your files and a script to check them so you don't upload the same stuff twice).
I use this client:
http://www.619.io/swift-explorer/
Cyberduck also works or you can use the OpenStack API to script backups.
rclone works too https://rclone.org/ <- use this if you want to encrypt.
Going in with no prior experience, I set up my own samba share on a Raspberry Pi using this guide.
As for sharing an rclone mount over a samba share, there are just two particular things to make sure you have done. Mount to an (empty) folder wherever you would like, just make sure it is in the samba share you made. The second thing is when you run the rclone mount command, make sure to add the argument "--allow-other" to the end. Windows didn't see my mount without it.
Hope this helps!
rclone is easy and free to use.
rclone sync gsuite: ebay: -v
This will copy everything from gsuite to ebay and delete everything on ebay that isn't on gsuite.
Here is a useful tutorial which is about syncing local content to the cloud, but the same principles apply. Rclone's website is also very useful.
If needed you can get a google cloud compute instance using the free $300 credit so you don't have to waste your home bandwidth.
Sure did! Using two different Google drive accounts for redundancy purposes in case one gets banned or I lose the account somehow. I have a series of scripts that I wrote to upload the content to both my Drives using this Google Drive CLI tool.
I have both Drives mounted using rclone mount which is where Plex is looking for my Movie and TV Show libraries. When the CLI tool uploads content, I write the directory to find it to a file. After the upload is complete, a script reads in the file and uses the Plex Scanner command line tool to just scan those specific directories and avoid a ban.
I've never had any issues streaming from Google Drive -> My Plex box -> My home, has never paused to buffer or anything even with 1080p movies that are around 5-6 GB. I'd be happy to share my scripts, however, I'd like to clean them up first since they are a mess as I was trying to hack together a solution. :)
rclone asks the remote you are using directly for a checksum. In your case you are using a crypt remote, and it will simply say "I don't know any checksum". If you used the ACD remote directly (without crypt) then checksums would work.
You can easily check this with the md5sum command:
$ rclone md5sum ACDcrypt:path/to/file
                                  path/to/file
$ rclone md5sum ACD:different/file
44615e4dff891c2469e586f9a5431abc  different/file
Where ACDcrypt is crypt remote, and ACD is plain remote. You can see that ACDcrypt doesn't print out checksum.
If you want rclone to explicitly compare checksums on a crypt remote you need to use the cryptcheck command. What this command does: it reads the nonce (a random value from the file in the remote), encrypts the local file, calculates a checksum over the result, and compares it to the remote file's checksum, provided the original remote supports checksums (ACD does).
I use rclone https://rclone.org/ to backup my Plex (and some other stuff). It can backup to backblaze b2, Amazon S3, Dropbox, it just about anything else you can think of.
You can (and should) configure it so that everything is encrypted as it is sent up, including file names. So if someone gains access to the bucket somehow (hack or subpoena or whatever) they see random file and directory names containing random looking data. They can see the file sizes and the directory structure (without names) which I think is ok.
As an interim/alternative, Air Explorer will sync folders with your MEGA account and allow you to exclude files larger than or smaller than a predefined file size.
Alternatively, Rclone is a command line tool, that can also achieve the same thing.
Your third alternative would be to mount your MEGA account as a local hard drive using Air Live Drive, permitting you to transfer files using standard desktop folder sync software, such as FreeFileSync etc.
> About the NAS. Partly why I wanted to get it was to have a faster archive and party because I wanted to experiment with self hosting plex server, password manager among other things.
My Plex server runs on a Mac mini, using cloud storage mounted with Rclone, which also handles encryption. Rclone also has a vfs-cache, so once a file has been downloaded, all subsequent access is local.
It also runs Bitwarden but those days are coming to an end. I no longer want to host critical infrastructure. I’m still evaluating my options there, which is why the Bitwarden service still runs.
> CCC is more regularly updated.
First of all, You’re the user, so whatever software you chose should meet your expectations, and only you can be the judge of that. My advice is to download trial versions of each and try them out to see which one fits.
I haven’t been following CCC closely, but ChronoSync is actively maintained, and there are current betas of their next version.
The number of updates mean nothing. Perhaps ChronoSync developers are better programmers so they make fewer bugs, requiring fewer updates ? Jokes aside, ChronoSync is rock solid, and it usually gets 1-2 updates per year.
Another plus for ChronoSync is that it’s a one time purchase. They haven’t yet, in more than a decade, charged an upgrade fee, and according to their website they never will.
You are using the wrong flags. It is confusing, but all --cache flags, with a few exceptions, are for the deprecated cache remote. It is all in --vfs-cache-mode type stuff now.
Remove:
--cache-tmp-upload-path=/home/me/my_mounts/tmp_cache/rclone/upload \
--cache-chunk-path=/home/me/my_mounts/tmp_cache/rclone/chunks \
--cache-workers=8 \
--cache-writes \
...
--cache-db-path=/home/me/my_mounts/tmp_cache/rclone/db \
Using --cache-dir is fine, but you need to add --vfs-cache-mode full and probably many other flags. I strongly suggest reading the entire mount docs very carefully, especially the section on VFS caching.
Also can remove
--fast-list \
--drive-use-trash \
--checkers=16 \
As the former does nothing on mounts (but is otherwise good for bucket remotes), and the drive one because you aren't using Google Drive (so you don't need any --drive-<...> flags). --checkers isn't used on mount either.
Is there a reason for --bwlimit=40M? That is pretty low, right?
Start with the absolute simplest mount possible plus cache and --no-modtime:
rclone mount --vfs-cache-mode full --cache-dir "<dir>" --no-modtime remote: mountpoint/
And then only add flags as needed. BTW, --no-modtime is good to have if you don't care about it. It'll save a lot of work.
You don't need to setup a remote for rclone to access local files. https://rclone.org/local/
So all you need to do is setup a crypt remote for your destination to encrypt stuff.
Then do something like this: rclone copy D:/folder/ crypt:
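For reference, a crypt remote in the config file ends up looking roughly like this (the names and the underlying gdrive: remote are placeholders; rclone config generates and obscures the passwords for you):
[crypt]
type = crypt
remote = gdrive:encrypted
password = *** obscured ***
password2 = *** obscured ***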
There are only 5 reliable ways to access OneDrive on Linux:
- Via the OneDrive for Linux client - https://github.com/abraunegg/onedrive - this 'syncs' your data, bi-directional operation, open source and free
- Via the 'onedriver' client - https://github.com/jstaf/onedriver - Native file system that only provides 'on-demand' functionality, open source and free
- Via 'rclone' - https://rclone.org/ - one way sync client, open source and free
- Via 'insync', 'ExpanDrive' - non-free client
- Via the web browser of your choice
It's easier than you think. SSH into your Synology, run the install script:
curl https://rclone.org/install.sh | sudo bash
Follow my guide, but ignore the Docker parts; just look at the very end of the commands (e.g. rclone config and rclone lsd onedrive: and rclone sync --progress --dry-run onedrive: /data/OneDrive) since you're running the commands locally on your Synology instead of through Docker.
Once you get the configuration done and you're happy with how the sync works, you can stick the rclone sync command in your Synology task scheduler to execute at regular intervals.
I haven't tried this with Google Drive, but I found that Cloud Sync was unable to see shared OneDrive folders; however, rclone can see them.
So I use Rclone to do this sync now. From the documentation, it looks like Rclone supports shared folders (noting that they have a --shared-with-me flag).
I wrote a guide on what I did to detail the settings and such for OneDrive, the process should be very similar for Google Drive. I ended up using Docker for this, but you don't have to - you can just install the binary.
The easiest way is to use rclone config on another device and then copy the config over to roms/gamedata/rclone, but you can also do it manually; for samples, check https://rclone.org/ and hit the "config" button for your backend. You can see the config file contents between the -------------------- markers.
I used B2 so I also needed to adjust cloud-sync.conf to point to the correct bucket/path.
My config looked like this:
[351remote]
type = b2
account = secret_hex_number
key = secret_alphanumeric_string
hard_delete = true
(The 351remote name also comes from cloud-sync.conf.)
rclone indeed works and it's an awesome piece of software. I configured it myself following their official guide. Maybe try it and report where you are stuck?
Also, how comfortable are you with a terminal? Just so I know how to address your questions.
Hello there
With a few hours on my hands, I came back to this...
So I tried using the chmod +x /usr/bin/local/rclone command.
I get the response: No such file or directory
I am now doubting if this Rclone was installed properly, if at all...
I used the Script installation: curl https://rclone.org/install.sh | sudo bash
I'm using rclone on a Linux machine to back up files from OneDrive/SharePoint.
Edit : I use it for Syncing the files, not downloading the whole archive again and again.
Just to clarify, rclone supports B2's server-side copy (see Remote Overview) as of 1.48.0.
So as long as it can be done, it will. I've seen files copied way faster than my internet could possibly do. And you could verify with --dump-headers
> how to do it right
As long as they are the same account (or maybe same bucket in their implementation? I don't know), you don't need to "do" anything. Just rclone copy b2:bucket/old b2:bucket/new will work. Or move, which will actually copy then delete.
There are only 5 reliable ways to access OneDrive on Linux:
- Via the OneDrive for Linux client - https://github.com/abraunegg/onedrive - this 'syncs' your data, bi-directional operation, open source and free
- Via the 'onedriver' client - https://github.com/jstaf/onedriver - Native file system that only provides 'on-demand' functionality, open source and free
- Via 'rclone' - https://rclone.org/ - one way sync client, open source and free
- Via 'insync', 'ExpanDrive' - non-free client
- Via the web browser of your choice
I think so.
Normally, when you create a client ID you basically give rclone total access.
https://rclone.org/onedrive/#getting-your-own-client-id-and-key :
5. Search and select the following permissions: Files.Read, Files.ReadWrite, Files.Read.All, Files.ReadWrite.All, offline_access, User.Read. Once selected click Add permissions at the bottom.
But you can set more limited permissions: https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/permissions_reference
Files.ReadWrite.AppFolder might be what you want.
> Still learning this stuff.
Seriously, the way you "learn this stuff" is to read the docs for the commands you use. You don't have to memorize it. You don't even have to read every word. But you should at least have an idea of what is there so that when you do need a feature, you have some idea that it is there.
I mean, it is a section header! Please, read all of the docs for mount and your remote.
Also, is there a reason you are using --allow-non-empty? That really shouldn't be needed.
And same for -L. By context, TS3: is a Gdrive, which doesn't support links. So why have a -L, --copy-links flag?
Along the same idea of RTFM, also make sure you understand the flags you use and, in general, use as few as needed.