I used to use borg, but didn't like requirement of having a local copy all the time. I switched to restic which supports a variety of protocols for direct backup. My setup uses a Minio (s3 compatible object storage) server running on my NAS. That bucket is then backed up to Backblaze to complete the 3-2-1 solution.
Fairly rudimentary, and prone to ghosting without block-level copies. You'd be better off using a durable differencing backend - git would work - to store the minimal set of changes, sync that chunk over, and provide a means to restore from those snapshots. Backups look like a quick solution but are deceptively complex; I'd encourage you to spend as little time as possible reinventing the wheel, and maybe even just build a wrapper around restic or Borg, which are purpose-built for this kind of thing. rsync is a quick fix and ends there.
Edit: to clarify, rsync precalculates what it sends over, so files added/removed between calculation and actual transmission are omitted. You're better off piping through pigz or GNU parallel to speed up transfers if you go this route as well.
I'm using https://restic.net/, an open source project that runs on Windows, Linux and Mac. I'm using it on my Windows 10 laptop, running a daily backup of my Users folder to a remote Linux server. It's been running for about six months now, and the only time I notice it is when the backup runs at lunch time for a few minutes.
Use a real backup client like Borg or Restic - they have all the features you want, plus encryption and deduplication. You're sort of reinventing backups on your own here, and it will ultimately save you time to switch to a more heavy-duty tool. (The extra setup time to switch is worth it in my opinion.)
As /u/Personal-Fuel4621 suggested, git with remote repositories is great for backup, and git-crypt gives you the privacy you'd want. That was my setup for a while, until I upgraded my self-hosted environment.
Another option, if you merely want backups and don't want to bother with git, is <code>rclone</code> (and its crypt backend takes care of encryption). It supports pretty much any cloud storage provider you can come up with, and it's FOSS. Just run it with cron, and you're golden (or enhance with something like <code>restic</code> to manage backup versions).
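As a sketch of that setup (the remote names here are illustrative - `gcrypt` would be an rclone crypt remote you created with `rclone config`, wrapping your actual cloud remote):

```shell
# Nightly encrypted sync to the cloud via the crypt remote.
# `gcrypt` is an assumed remote name, not a built-in.
rclone sync ~/documents gcrypt:documents

# A matching crontab entry for 02:30 every night:
#   30 2 * * * /usr/bin/rclone sync /home/you/documents gcrypt:documents
```

Files land on the provider already encrypted; rclone decrypts transparently when you read them back through the same crypt remote.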
I'm a little disappointed that I don't see anyone using Restic and Backblaze B2. I'm considering this setup and was hoping to read about how well it works (or doesn't!)
Well it doesn't have to be a traditional cron job, it can be a systemd service like I was saying too. The things I monitor in my healthchecks...
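As a hedged sketch of what that monitoring can look like with healthchecks.io (the check UUID, repository and paths are placeholders), the script the cron job or systemd service runs can ping a check URL only on success:

```shell
#!/bin/sh
# Placeholder repository; restic reads it from the environment.
RESTIC_REPOSITORY=/srv/restic-repo
export RESTIC_REPOSITORY

# Tell healthchecks.io the job has started (optional, measures run time).
curl -fsS -m 10 --retry 3 "https://hc-ping.com/your-check-uuid/start" > /dev/null

if restic backup /home /etc; then
    # Success ping - the check stays green.
    curl -fsS -m 10 --retry 3 "https://hc-ping.com/your-check-uuid" > /dev/null
else
    # Explicit failure ping - healthchecks alerts immediately instead of
    # waiting for the check to time out.
    curl -fsS -m 10 --retry 3 "https://hc-ping.com/your-check-uuid/fail" > /dev/null
fi
```

The nice part is that a silently broken schedule also gets caught: if nothing pings at all, the check goes red on its own.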
I would use a simple snapshot-based backup software. Something like Restic or Kopia, but there's a lot of these out there now.
I don't see it mentioned yet but restic (https://restic.net/) is pretty nice.
Like the others told you, you need monitoring on that specific process
One thing I didn't see in this thread is that you should also know what's in the backup. I'm not talking about just the nature of the data but also specifics about the business context that help you differentiate it from your other backups. If you roll back a DB to a previous state where it's missing schema changes or you end up restoring access wrong access rights or user passwords then it's better to know that in advance :)
Then you also need to verify that you can restore the data, as others pointed out, so you should have restoration systems implemented: In place, point in time, full disaster recovery (new machine, different availability zone, different region, different provider)
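For a restic-based setup, a periodic restore drill might look roughly like this (repository path and directories are made up for illustration):

```shell
# Verify repository integrity, re-reading a random 5% of the raw pack data
# so silent corruption on the storage side gets noticed.
restic -r /srv/restic-repo check --read-data-subset=5%

# Restore the latest snapshot into a scratch directory and spot-check it
# against the live data.
restic -r /srv/restic-repo restore latest --target /tmp/restore-test
diff -r /home/user/documents /tmp/restore-test/home/user/documents
```

The full disaster-recovery variants (new machine, different region/provider) are the same commands run from a host that has nothing but the repository credentials.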
Other thing to note is to make sure your backups are redundant (to a point) and not just left in the hands of some other org to manage.
Convenience is great until you suffer an outage and your offsite backup partner is down/unavailable too.
You should understand your risks and tradeoffs for each failure scenario you're prepared or unprepared for :)
On Linux I use restic and I'm very happy with it. Besides making incremental backups, it also has a content-deduplication mechanism, which lets you keep multiple versions of a backup while using little storage.
You can also set a policy for deleting old backups.
EDIT: besides backing up to an external disk, it's possible to set up a remote repository. Various services are supported, such as Amazon S3 and Backblaze B2.
EDIT2: I forgot the two most important things. Backups are encrypted, and the software is open source.
When I was recently looking into whether I might have to over-engineer my own backup solution, I ended up deciding to use https://restic.net (it’s FOSS). It’s actually quite simple to set up, pretty great IMO. Also, restic plays nicely with rclone, if that’s the kinda thing that would float your boat
It doesn’t. Not directly at least.
But they added rclone as a backup destination so any service that works with rclone should now work with restic.
I can highly recommend restic. Single binary, content-addressable storage (=> deduplication & incremental), very fast.
In addition, it can also easily do offsite backup to various object store services such as Google Drive, S3, ... and BackBlaze B2 (which I'm using).
So for backups, the cleanest and simplest way would be to use Timeshift and create a profile that goes to an external disk. I'm a glutton for punishment, so I use a command-line application called restic. I've written a simple script that runs the backup and prunes old ones. Please bear in mind that neither of these programs supports compression natively. I have used Duplicati and Duplicity (with the Deja Dup front end) in the past, and they are also good applications. Duplicati runs a "server" on your computer that you can access with a web browser to set up the profiles and manage backups.
As for the USB key, Arch's wiki has a fairly in-depth article on the subject with examples to get you on the right path. The hardest part for me was just getting the kernel variables correct. Side note: I have not tested this in conjunction with the keyfile in the initramfs option (encrypted boot w/ automatic root unlocking).
The thing that comes closest to it is Restic, which has the bonus of being native on Windows as well, if that matters to you. I've been testing and playing with it; it works well and basically has the same feature set as Borg (minus compression).
The main problem I have with it is that its pack size is not 500 MB like Borg's, but only 4 MB (!). This is extremely inefficient for transfers over the Internet. They're working on changing it, but they're far from completing it.
First, a version control tool is not a backup tool.
If you already have a NAS, why not use a solution like BorgBackup or Restic? They're very helpful: back up from your local machine to your NAS, and from your NAS to an off-site location (for example cloud storage).
Second, if the location you push your changes to from your local machine is on the NAS (maybe - I'm not sure about that), you already have a redundant copy of your local hard drive. The NAS will hopefully be backed up off-site as well?
Apart from that, if you have a parent folder and subfolders containing git repos, git will usually not allow you to add them via git add .;
it will usually try to give you the hint to use submodules/subtrees. As you already wrote...
You access S3 storage using its own interface.
There are many providers of S3-compatible storage, for example:
- Backblaze as a cloud storage provider (datacenters in the US as well as in the EU); I think they have the best prices
- MinIO as an on-premises solution that you would install on your own metal
I guess the 80TB consists mostly of large media files?!
S3 is an object-storage, thus designed for storing files…it seems to be a good fit for media files.
Depending on the primary storage of those 80TB, you could use something like restic to backup those data directly into an S3-Storage…
…Veeam can use S3-Storage, too…
Please take a look at Restic (https://restic.net/); with it, it doesn't matter whether the hoster is sloppy with the crypto. I use it to back up to Backblaze B2, but it supports other targets too.
What I like is how blazingly fast it can compute the delta for snapshots. Even my phone backs itself up with it at night (Tasker + Termux).
The only remaining drawback: pruning snapshots is currently very slow, but that topic is actively being worked on.
Aside from what everyone is telling you about not using SHA-256 for password hashing at this point...
Sounds like your problem isn't the hashed passwords, but just creating backups. If you encrypt your backups before sending them somewhere, you can store them on Google Drive too: at that point, they're encrypted anyways. You can look into restic with a rclone backend to perform end-to-end encrypted backups and store them in Google Drive.
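A rough sketch of that combination, assuming you've already set up a `gdrive` remote with `rclone config` (the remote and path names are illustrative):

```shell
# One-time: create an encrypted restic repository behind the rclone remote.
restic -r rclone:gdrive:backups init

# Each run: back up; data is encrypted end-to-end before anything
# leaves the machine, so Google only ever sees ciphertext.
restic -r rclone:gdrive:backups backup ~/documents
```

Google Drive then holds only restic's encrypted pack files; neither the provider nor anyone with account access can read the contents.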
Performing a large-scale recovery from the versions feature will be a nightmare. It could also chew up a lot of disk space. By all means use Syncthing to copy source files to the server - then use a backup program for the actual backup. I use restic for this.
Thanks for your response!
> So, that's why I recommend the 3-2-1 backup strategy
You're right of course. My intention all along was supplementing rsync with either:
If you want to keep your backups as .tar.gz, you can make a small change and send them to the cloud as .tar.gz.gpg. On the servers that generate those backups, install the public keys of the people/machines that should be able to access the content, encrypt the files with all those GPG keys (you can do a single encryption with multiple recipients), and then send them to the cloud backup. Only the people/machines with the private keys will be able to decrypt them.
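A sketch of that flow, assuming the recipients' public keys are already imported on the server (the paths and e-mail addresses are made up):

```shell
# Create the archive, then encrypt it for several recipients at once;
# any one of the matching private keys can decrypt the result.
tar czf backup.tar.gz /srv/data
gpg --encrypt \
    --recipient admin@example.com \
    --recipient ops@example.com \
    --output backup.tar.gz.gpg \
    backup.tar.gz

# Ship only the encrypted file; decrypting later needs just one private key:
#   gpg --decrypt backup.tar.gz.gpg > backup.tar.gz
```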
But there is a lot of open-source backup software that encrypts in a pretty safe way, and some of it has the functionality of sending backups to the cloud built in - restic, for example.
The first home server I built was essentially just a desktop computer with a bunch of disks in it. Still using it now actually.
ATX is familiar. Just find one with enough 3.5" bays and a motherboard with plenty of SATA ports and you'll be good.
I decided to go with software RAID (Windows Storage Spaces) but there are lots of options software-wise or traditional RAID card.
For backing up to GD I'm currently testing out restic + rclone. Working well so far.
Hey there. Same setup here (venv on an Intel NUC).
I'm a huge fan of restic (https://restic.net/).
Backups are done via a cron job every night. The cron shell script takes care of the snapshots and deletes old files as well. My policy is to keep seven daily, four monthly and 12 yearly snapshots. Backups are fully encrypted and incremental, so you don't have to worry about leaking secrets or about disk space... Every snapshot can be easily loop mounted.
If interested I could share my backup script!
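Not their script, but a minimal sketch of a nightly script with that kind of retention policy might look something like this (repository and password file locations are placeholders):

```shell
#!/bin/sh
# Placeholder repo over SFTP and a password file restic reads automatically.
export RESTIC_REPOSITORY=sftp:backup@nas:/restic
export RESTIC_PASSWORD_FILE=/root/.restic-password

# Incremental, encrypted backup of the home directories.
restic backup /home

# Apply the retention policy: 7 daily, 4 monthly and 12 yearly snapshots;
# --prune actually frees the space of dropped snapshots.
restic forget --keep-daily 7 --keep-monthly 4 --keep-yearly 12 --prune
```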
> Firstly, is this a correct understanding of how the tarballs would be seen by rsync?
True, due to the encryption it is not trivial to deduplicate backup archives. It has to be done before the archive is encrypted.
> Secondly, what’s a standard workaround to this problem?
Use a tool designed for the job, e.g. borg or restic. Others are listed here.
There is also tarsnap, but the license forbids you from modifying the code or using it to interact with a self-hosted server.
I've been running this for years. It's an interesting plugin, but it generates a lot of false positives if you are writing a massive number of files at once or have files that are modified in place (e.g. docker.img).
I haven't tried to correct anything or decide which copy of a file is correct, but the process would involve comparing the checksum values generated for each of the files.
If bitrot correction is important to you unraid probably isn't the answer. I'd go with something like ZFS.
For offsite backup you may also want to consider Borg or restic. They maintain versioned, deduplicated backups of your data. If bitrot on your main copy does occur, it will be detected as a "change" during backup. You can catch that in the logs and decide for yourself whether the file should have changed or whether it was a case of bitrot. Either way you have a versioned copy of the data, and you can decide to recover whichever one you want.
As a side note, you might be interested in QCTools which is designed to analyze videos. One of the key use cases is validation of video archive integrity.
There are a number of "deduplicating" backup programs which give you the "best of both worlds" in those regards: Each subsequent backup after the first only adds about as much space as the files that have changed, but every backup functions as a full backup, even if you remove intermediate ones. Restic (my backup tool of choice), Borg, and Duplicacy are probably the most well-known ones. With them, you can make backups every day, and keep them going as far back as you like, without taking up a ton of extra space for unchanged data.
Arq backup (backup up to a cloud is optional) It's not free, but definitely worth the price in my opinion.
A (terminal) tool which requires more technical knowledge, but free, is Restic
I used to think that syncing files was the solution. The best solution I've found is to make backups. Encrypted backups are really useful. There's a tool called restic that can do just that. The first backup will take a long time, depending on the amount of data; backups after that are pretty fast. The tool itself is really fast, but again it will depend on the amount of data being backed up. I honestly stopped using sync tools after doing this. I use Syncthing for some files, but just to sync between my desktop, laptop and server, and there aren't a lot. Those files are in my off-site backups. I know restic could seem like a lot for someone not used to working with this kind of solution, but it is really awesome: it uses deduplication, which saves a lot of space, it encrypts the data before uploading, and it's really versatile with backends. If you're backing up to Google Drive, for example, you can just use the rclone backend. It works on Windows, Linux, *BSD, Mac, etc. You can search the docs and the forum for more info:
Restic: https://restic.net/ Forum: https://forum.restic.net/
If your data doesn't have a high turnover, then restic might fit - I seem to remember it from a while back. I think I might look into altering my existing backup strategy actually, because while duplicity is nice and all, it requires a new full backup in order to delete an old chain of incremental ones, and I'm having some trouble getting my Raspberry Pi 3B+ to create a backup without the process getting killed for some reason.
Also note that the Raspberry Pi's USB and Ethernet ports share the same bus, so the bandwidth is significantly lowered. Check out hackerboards.com for a list of others that might do the job. The Tritium H3 (uses the Allwinner H3 SoC) by Libre Computer looks good, as do the Orange / Banana Pis.
Hm I am currently using restic (https://restic.net) as a backup tool. It will immediately fail if it can't connect to the NAS, so testing on a small script shouldn't be necessary. A notification in case of failure would be nice though.
Try restic https://restic.net/. It is a single static binary that is able to backup, deduplicate sources to a varying number of cloud destinations including S3.
Runs very fast from CLI, and supports encryption as well.
So do we at this time, and I'd recommend it - backuppc has never let us down, and we have a lot of data going in (and, fortunately, not that much going out again on a regular basis :)).
We are, however, evaluating restic as an eventual replacement. While the loss of the very powerful webinterface will definitely be felt by some, restic is just so much faster than backuppc in day to day operations. We're not yet sure how to properly manage repos and centralized authorization, but I'm positive we'll get this sorted out.
> which I use for private archiving of electronic documents (bank, invoices and much more)
> Mails (backup)
https://isync.sourceforge.io/ under WSL2
> With that I'd like to reduce the amount of storage on the one hand (zip/compress), and prevent any write access on the other (read-only access).
Controlled Folder Access & NTFS ACL & NTFS compression
I would add Restic to the list of backup software. As it's very similar to Borg afaik, I'd put it as " Borg/Restic^CLI ".
Restic has one significant advantage over Borg in that it supports both Windows and Mac natively. Linux beginners will mostly be users who have previously used Windows/Mac, not people who are completely new to PCs, so I'd say it's worth showing them backup software that they can use on all systems. It's a bit easier to switch to a new OS if you can reuse some of the software from the old one and not have to learn everything from scratch.
Try using rclone in combination with restic or borgbackup. They can keep the old versions for you while only saving changed files again and deduplicating your Backups.
You can initialize a borgbackup repository in append-only mode. Borg can deduplicate, compress (e.g. with zstd) and encrypt. If you need to attach it to some S3-compatible interface, you can use rclone (sftp)/sshfs for that. It isn't the most robust thing in the world, but we manage to compress more than 2 TB of backups into ~170 GB of S3 storage space using this setup. It might be good enough for some.
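A rough sketch of that setup (paths, remote names and compression level are illustrative; append-only can also be enforced server-side via `borg serve --append-only` in the SSH key's `authorized_keys` entry):

```shell
# Create an encrypted repository.
borg init --encryption=repokey-blake2 /mnt/backup/borg-repo

# Create a compressed, deduplicated archive; {hostname} and {now}
# are Borg's built-in placeholders for archive naming.
borg create --compression zstd,6 \
    /mnt/backup/borg-repo::{hostname}-{now} /home /etc

# Then mirror the repository directory to S3-compatible storage with rclone.
rclone sync /mnt/backup/borg-repo s3remote:backup-bucket/borg-repo
```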
I haven't looked into restic.net lately; they had some speed improvements, so that might be an option too.
Some like Backblaze; to be fair, some hate it. Their B2 service allows you to store anything you like (instead of picking "what matters" like their normal backup tool), presenting an S3-like interface, so any of the solutions targeting S3 can be used as long as you can use a custom DNS endpoint name.
You might look at restic which can store into B2 (and more) and uses encryption on your side so that the storage provider cannot see your data directly.
I use Restic via a cron job, to backup to S3. Works pretty well, does encrypted, incremental backups.
Thanks to it being incremental, I don't exceed the free tier in AWS. I don't back up my media using restic, though. Restic lets you blacklist folders while backing up.
There is a new crop of backup software that introduces an entirely new concept for backups, namely "snapshot backups". Not all of them call it the same.
The old backup concept is of "fulls" and "partials". A full backup is a backup of your entire backup set (=the set of files you decided are worthy of backup) and a partial is a backup of everything that changed since the previous backup.
The new "snapshot" concept is that each backup looks like a full backup, but in actuality it is just a snapshot of what you have in your backup set, with some files (those that changed in the meantime) actually stored anew, while others (which haven't changed) are references to previous snapshots.
The upshot of this concept is that you treat each snapshot as a frozen moment in time, which holds a version of your files from that moment. You can walk backwards in time and get your file from before it was corrupted. Traditionally you would have had to go to your most recent "full" backup, then apply more and more "partial" backups until you found the one that had your corrupt file, and then do it all again except for the last "partial". With the new concept, you can run a comparison between snapshots, focusing on that file, until you find the last time it was changed, and then just grab it.
The implementations of it can retrieve different versions of your files quickly, even from on-line backups.
Software that implements the new concept includes restic, duplicati and a few others.
Spideroak. I used it for a while. Michael interviewed the CEO in episode #97.
I currently use restic+Backblaze B2, so everything is encrypted before hitting their cloud.
You might want to try out Restic. It can back up to a number of destinations. But as mentioned before Veeam is also pretty great!
I'm hosting on Linux, so you're going to need to do a little converting... but I use a cron job (scheduled task) to make and maintain backups with restic. Restic handles the heavy lifting of deleting old backups and deduping the files. I run a backup every 5 minutes and prune the backups based on age. There's a term for it that's escaping me now... but I keep the last 12 backups, so every 5 minutes for the last hour. I then keep the last 24 hourly backups - the backup closest to each hour, so 9 am, 8 am, 7 am, etc. - then 28 dailies, 26 weeklies, etc. The further back it goes, the less granularity it has.
Your $HOME and the configurations you made there inside the dot-folders and dot-files, some things in /etc, and even some in /usr.
Using restic or borgbackup is a very good option as they have incremental backups with deduplication, and you can also mount your backups to restore individual files or folders.
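For what it's worth, the mount workflow looks roughly like this (repository path and file names are made up):

```shell
# Browse all snapshots as a read-only FUSE filesystem.
mkdir -p /mnt/restic
restic -r /srv/restic-repo mount /mnt/restic

# In another terminal, copy individual files back out of any snapshot:
cp /mnt/restic/snapshots/latest/home/user/notes.txt ~/notes.txt
```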
You can come up with your own strategy for backing up your files, but It is kind of like reinventing the wheel. It is better to use a pre-existing tool (for example, restic) that is much better tested by a lot of users and proven to work.
I've been using a tool called Restic https://restic.net/ for a few years now, and I highly recommend it since it's so simple and has some really nice features. It's pretty easy to script, and it works with a lot of external / cloud drive services as well as local repos
My current workflow for backing up is to use restic which I use to encrypt and sync to Backblaze B2
For categorising I've tried a few apps, including Dark Table and ON1 but I've since settled for the built in Mac Photos. Surprisingly, Photos actually does most of what I want with the latest version of MacOS. Affinity also has some extensions that integrate with it too which is quite nice.
If you want root access & bare metal, I don't know where you'd get it cheaper than Hetzner - e.g. about €30 per month for 2×750GB.
If it's just a server though, you might need to worry about backup (I use restic onto Backblaze)
The storage boxes someone else linked to look very good value, but I get the impression you don't get root access?
I've been very happy with https://restic.net/ as a backup solution. You can have it send data (encrypted) to a variety of different third parties or other sources. I set it up in a cron job and very, very occasionally I go and look at it to confirm that it's working.
I'm using restic (https://restic.net/) for my nightly backups. It makes incremental backups and can delete old backups with a self-defined retention period.
It has support for different storage backends. I'm currently using SFTP, which would perfectly work for your use case. My setup also includes a RPi with a USB external drive attached to it where I store the backups.
Dedupes, encrypts before it's stored, incremental backups, tagging, snapshots, and my personal favorite: written in Go–just one binary.
Only performance problem I think it [Restic] still has is when storing a tremendously ridiculous amount of files (like 4TB). Don't know about Borg. I haven't tried storing that much myself, so ymmv.
BorgBackup has some notes about running in the Windows 10 Linux Subsystem or using cygwin
Restic is cross platform and highly regarded by some
I didn't want to overwhelm you (coz we've just met :)
But the next level would be to add encryption and snapshots over rclone
Which brings us to restic
I use it in combination with restic over WireGuard as a backup target for some of the cloud servers I have running. It's not as simple to get running as creating a new SSH key and calling it a day, but it feels cleaner to me to go about backups in this way. I plan on adding some sort of authentication down the line, either via TLS client certificates or HTTP authentication of some nature.
I imagine you could also use it in most other places where S3/cloud object storage would fit (NextCloud, simple file sharing) since it supports the S3 API.
From my perspective, still true. I would stay far from Arq 6.
My biggest concern is that, as far as I can tell, there is no documentation or open-source code about the backup format. Whatever "encryption" they implemented cannot be verified in any way. Right now you have to blindly trust that Arq 6 works, as without sample restore code or documentation there's no evidence it does until you continuously "test" restoring backups.
I'm moving to Backblaze for my "main" Mac and Restic for my Linux machines.
While Restic doesn't have deep OS integration with macOS or Windows (no volume snapshots, system service or graphical UI), it's the best open-source backup tool I've used, surpassing Duplicity in my opinion. Restic also supports backing up to rclone, which is what I used to backup to my Microsoft OneDrive. If you don't mind the CLI and setting up a scheduled job yourself, this is the best tool for backing up to your own storage solution.
For a backup system "for the rest of us", Backblaze simply wins over Arq. Version 6 broke just too many times and has too little support when (not if, when) things go wrong.
I like to think of this setup as sort of a one way valve with two ends of pipe:
"db server" can't talk to "backup staging server", and really knows nothing about it other than having a public ssh key in its allowed keys list.
So, the overview would be something like this:
Take a look at restic (https://restic.net/) for doing b2 backups from the staging server. It's got hooks for b2, among other cloud storage providers. I'm a big fan, and it allows you to do incremental backups while providing encrypted containers.
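A sketch of wiring restic to B2 from the staging server (bucket name and credentials are placeholders):

```shell
# B2 credentials; restic picks these up from the environment.
export B2_ACCOUNT_ID=your-key-id
export B2_ACCOUNT_KEY=your-application-key

# One-time repository creation in the bucket, then incremental,
# client-side-encrypted backups on every run.
restic -r b2:my-backup-bucket:staging init
restic -r b2:my-backup-bucket:staging backup /srv/staging
```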
Good Luck!
Maybe Restic? I don't know about the specifics of Samba or "storing your SMB credentials", but it has a Windows client and is compatible with Backblaze; it encrypts everything and only uploads the changes instead of re-uploading everything.
Rclone doesn't support versioning per se, though it does have this option which I believe can be used to achieve something similar. It should be noted that S3 natively supports versioning, though, so you might want to just use it there. To answer your ransomware question, yes, rclone would overwrite the clean copy with the corrupted copy in that scenario.
Now, to directly address ransomware: in my experience, the best defense you can have against this is by implementing proper least-privilege permissions along with an append-only backup strategy where the client system is unable to modify old backups at all, and a separate retention mechanism is used like automated TTL or manual removal.
I also forgot to mention another open source backup solution that's been on my to-do list for a while now: restic. It's definitely worth a look.
I'm using restic (https://restic.net) for that task. It encrypts your backups and you can use different backends. I use a local S3 provider, and I back up to a Synology in my parents' home. You can use it manually, or you can find scripts in the forums to automate it. But note - I attached my files as "external storage" in Nextcloud, so I back up those files, not the complete Nextcloud instance. (I think you can find solutions for making a local backup of your Nextcloud somewhere in its documentation; you could then use restic to "back up" that Nextcloud backup.)
Kinda fun but you will never be able to reach the level of robustness and performance of restic
I'm all for custom solutions, but not if there's something open-source available that already does it better
Check out restic. Has a variety of backend options, encrypts automatically, handles deduplication and snapshots. can easily be scheduled with cron or windows task manager.
Also, take a look at backblaze b2 - cheaper than s3.
These are my thoughts and levels of exposure, given the topic, but I'm more than open to other suggestions from anyone, since I'm sure there are other, possibly better(?), ways of doing this.
I don't know of any metadata system that works the way as described (that are free or cheap, anyway), so these are mainly raw storage solutions listed.
--
Possibly quickest way:
Use Google Drive. I think they have pretty good rates for 1TB, I don't remember exactly.
Quick and dirty way:
Upload to Bandcamp (dependent on file size), and publish as private. You can upload other "bonus" materials (pdf, pictures, etc), and the description could serve as a note system too.
The complicated* way, but yields the best ratio for savings to usage (imo):
Use Restic[1] and back up to Backblaze[2]. Backblaze is a storage service that offers a Ridiculous™ amount of space on the cheap, and when coupled with a tool like Restic, it makes storage off site (if an $80 TB drive isn't an option) very tempting.
--
What kind of communication are you going to set up between the computers? If you're using a VPN to have them all on the LAN then copying each backup to the other HDD shouldn't be too hard - You'd just need the target copy location to be \\computername\D$\BackupDir
where the letter is whatever the drive letter of the external is.
If you don't have a method for networking the computers then you'll have to settle for having one computer backup to the single HDD that's connected.
I've chosen restic which I'm using for local backups to an external but later I'll use it for backups direct to cloud storage.
It sounds like maybe restic can do what you're looking for. It does incremental backups via a "snapshots" concept, meaning you can go back to previous versions.
I've personally been very pleased with it.
Cool. Thanks.
The main issue i had/have with Borg is that you need either a plan somewhere like BorgBase (or your own server with ssh/borg), _or_ you need to locally create a repository and then sync this up to $cloud with eg. rclone.
I guess restic might be another option?
Reading the README for timeshift, it makes no mention of fuse ( although it does for other FS types ). Suggests to me that perhaps timeshift can't handle fuse based sources properly.
I assume you're using Linux, so if you're somewhat command-line based, I would recommend restic. It's very fast (after the initial backup) and generally easy to use, and works fine with kbfs (just tested). But if all you want is a proper local copy of your kbfs storage, then rsync is probably more than sufficient.
I'm using restic (https://restic.net/) to backup my Linux-Server.
It should be compatible with Windows and Mac, too.
Restic splits every file into small chunks and only transfers changed ones.
Therefore a file transfer is very lightweight.
It's fully encrypted by design.
> The backup repo must be a proper host that can run the server app. I can't borgbackup to s3 or something cheap like that.
You can look into restic, which is similar to borgbackup but can back up straight to object storage.
Have a look at restic (https://restic.net/); it seems a promising backup tool. I've not used it in anger yet, but it does support client-side encryption and a few destinations like S3, GCP and SFTP.
Until I couldn't keep paying for it, spideroak was my favourite for home, used that for years for backup.
Have insync at work, sucks on the per account thing but it is pretty good. There is a free tool on github for google drive (called drive I think) but I don't think it's as good.
I won't be able to explain it as well as restic's article, but the idea is that there are natural breaks in many binary files. By looking for them (and, in Restic's case, I think also knowing what you already have) you can break your file up better. So shifting all data by 1 byte may not change the chunk boundaries, and may only require uploading one chunk.
The pros are less space on the backend, less bandwidth, and faster updates (the latter two may matter to consumers). The cons are increased complexity in coding and maybe processing. But Backblaze computes hashes anyway. Not sure how the content-based chunking compares to a pure hash.
I am not saying it's worth it, especially since they don't dedupe across machines, but it is worth at least considering.
Restic is a good option for doing backups.
You can have the data sent to another server, or directly to a cheap object storage provider.
Everything is based on encrypted snapshots, so it's extremely efficient on remote storage costs.
I would recommend restic backup.
It has things like deduplication, snapshots, incremental backups, S3 storage, encryption and so on.
I think you can make daily snapshots and then create a script that thins the daily snapshots down to weekly ones.
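For the thinning step, restic can actually do this itself with its `forget` command (e.g. `restic forget --keep-daily 7 --keep-weekly 4 --prune`). As a sketch of what such a policy does, here's a toy model of keep-daily/keep-weekly selection; it's my simplification, not restic's exact algorithm.

```python
from datetime import date, timedelta

def thin(snapshot_dates, keep_daily=7, keep_weekly=4):
    """Toy keep-daily/keep-weekly retention: keep the newest `keep_daily`
    days outright, then one snapshot per ISO week for `keep_weekly` weeks.
    A simplification of what `restic forget` implements for real."""
    keep, seen_days, seen_weeks = set(), set(), set()
    for d in sorted(snapshot_dates, reverse=True):   # newest first
        if len(seen_days) < keep_daily:
            seen_days.add(d)
            keep.add(d)
            continue
        wk = d.isocalendar()[:2]                     # (ISO year, week number)
        if wk not in seen_weeks and len(seen_weeks) < keep_weekly:
            seen_weeks.add(wk)
            keep.add(d)
    return keep

# 30 daily snapshots collapse to 7 dailies + 4 weeklies
daily = [date(2024, 1, 1) + timedelta(days=i) for i in range(30)]
kept = thin(daily)
```

The point is that nothing needs to be rewritten on the backend: "shrinking" is just deleting snapshot references and pruning the chunks no surviving snapshot uses.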
I use a tool called restic to backup my systems to google cloud storage (completely different from their gSuite stuff). Stores all my data in 2 different regions around the US.
For my work machines (we use google cloud for work) I do similar things with the basic gsutil command line tools.
If you use Backblaze's B2 service, it's way cheaper than their service marketed at consumers. It only costs $0.005 per gigabyte per month. With my ~150 GB collection, that costs less than $10 per year. I use it with the Restic backup program.
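The arithmetic checks out at the quoted rate (B2 pricing can of course change):

```python
# Back-of-envelope check of the cost claim above, at the quoted B2 rate.
rate_per_gb_month = 0.005          # USD/GB/month, as stated in the comment
size_gb = 150
monthly_usd = size_gb * rate_per_gb_month
yearly_usd = monthly_usd * 12
print(f"${monthly_usd:.2f}/month, ${yearly_usd:.2f}/year")  # $0.75/month, $9.00/year
```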
If you want to backup your photos, use backup software. Having it synced is not a backup.
The way I solve this is to have two sets of shared folders:
"Media" shared between phone, laptop and desktop.
"Media Archive" shared only between desktop and laptop.
Every year I create a new folder in the Archive, and move all the photos from Media into the Archive. This removes the photos from the phone and keeps them on the computers.
I use restic to keep both folders backed up as Syncthing is not backup software.
Glad to hear that rsync.net is working for you :)
I hope it will be useful and interesting to note that rclone[1] is in place, server-side, at rsync.net.
Further, as mentioned above, restic[2] works perfectly with rsync.net as well.
rsync.net is stock-standard OpenSSH so really any SSH/SFTP based tool will work perfectly.
I would recommend doing something like a full-blown restic backup and keeping it around for a bit.
This will allow you to pull or inspect anything from your current installation: everything from grub settings to installed packages, your home directory, etc. Furthermore, you can run new backups into the same restic repo and always have that fallback.
I personally use Restic. You've got to script it yourself and make it a cron job (*nix) or schedule it as a PowerShell script (Windows), but it's self-explanatory.
Very powerful snapshot retention cycle, and the ability to self-verify. As for reporting, I just pipe the output into a text file and mail it to myself... (rudimentary, I know, but it works for me right now).
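For anyone wanting a starting point, a rough sketch of such a wrapper could look like this. The repo URL, paths, log location and retention numbers are invented examples; `restic backup` and `restic forget --prune` are real subcommands, but check the docs for your own setup.

```python
#!/usr/bin/env python3
"""Rough sketch of a scheduled restic run: back up, apply retention,
and append output to a log you can mail to yourself. Repo URL, paths
and retention numbers below are illustrative guesses, not a recipe."""
import subprocess
from datetime import datetime
from pathlib import Path

def build_cmds(repo, paths):
    return [
        ["restic", "-r", repo, "backup", *paths],
        ["restic", "-r", repo, "forget", "--prune",
         "--keep-daily", "7", "--keep-weekly", "4"],
    ]

def run_and_log(cmds, logfile):
    with Path(logfile).open("a") as log:
        log.write(f"--- run {datetime.now():%Y-%m-%d %H:%M} ---\n")
        for cmd in cmds:
            res = subprocess.run(cmd, capture_output=True, text=True)
            log.write(res.stdout + res.stderr)

# Called from cron, e.g.:  0 12 * * *  python3 /usr/local/bin/restic_run.py
# run_and_log(build_cmds("sftp:user@host:/backups", ["/home"]),
#             "/var/log/restic-report.txt")
```

Cron's own MAILTO handling (or a `mail` call at the end) covers the self-mailing part.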
For me I have my library on a primary external drive that I work from. One Lightroom catalog for all work, and typically let Lightroom import and then I manually rename the folders based on the below.
Import into yearly folders, then into folders titled <yyyymmdd>_<airportcode>_<eventname>.
E.g. 2019 > 20190513_pdx_climbbroughtonbluff
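A naming scheme like this is easy to script; a hypothetical helper (the function name and slug rules are my own, not the commenter's):

```python
from datetime import date

def shoot_folder(shoot_date, airport_code, event_name):
    """Build 'yyyy/yyyymmdd_airport_event' per the scheme above."""
    slug = event_name.lower().replace(" ", "")
    return f"{shoot_date.year}/{shoot_date:%Y%m%d}_{airport_code.lower()}_{slug}"

print(shoot_folder(date(2019, 5, 13), "PDX", "Climb Broughton Bluff"))
# 2019/20190513_pdx_climbbroughtonbluff
```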
Do my keywording, editing and post-processing to my satisfaction. Then use a tool called Restic to back up my working drive to a local NAS, plus a third copy to the Amazon cloud (AWS S3). If I'm feeling particularly on the ball I'll also take a quarterly backup on a drive and throw that in a fire safe or offsite.
This solution, for me, makes it dead easy to find stuff. And is absolute peace of mind as I can recover RAWs, keywords (xml sidecars), and any post processing I’ve done from more than one place if things ever go sideways.
Dropbox could work for you, but I'd lose sleep over being reliant on just two places. If your drive goes bad and Dropbox syncs that, you've potentially lost data in both places.
I'm not concerned with inter-device synchronization, but for strict backup purposes, I find restic to be pretty simple and hassle-free. I wrote a simple bash script which uses restic to back up my system to both local and Backblaze cloud storage simultaneously. It's already saved my ass.
I use a shell script to automatically back up all my important data to both an external hard drive and my Backblaze B2 cloud storage using restic. If you're comfortable using a command line, it's configurable, fast, easy, and cost effective.
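A sketch of what the core of such a dual-target script can look like (repo names and paths are invented; in a real setup restic also needs `RESTIC_PASSWORD` or a password file in the environment, and `b2:bucket:path` is restic's actual B2 repository syntax):

```python
import subprocess

# One local repo, one B2 repo -- invented example names.
REPOS = ["/mnt/backup/restic", "b2:my-bucket:restic"]
PATHS = ["/home", "/etc"]

def backup_all(repos=REPOS, paths=PATHS, runner=subprocess.run):
    """Run `restic backup` once per repository; the runner is injectable
    so the command construction can be tested without restic installed."""
    results = {}
    for repo in repos:
        results[repo] = runner(["restic", "-r", repo, "backup", *paths])
    return results
```

Because restic deduplicates per repository, each target ends up as an independent, self-contained copy, which is exactly what you want from the "external drive plus cloud" combination.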
I'm using open-source backup software (restic) that encrypts and writes to Backblaze B2 storage. This is the same type of storage the Backblaze software uses.
The cost/performance ratio so far has been great. And I like restic: since it encrypts, Backblaze doesn't have my data.
Interesting. Restic - https://restic.net/ - looks interesting: backup and encryption. That didn't show up in my earlier searches. That's two out of three... Looking into it, definitely. Bacula looks like it can do it, but seems a bit 'heavy' for the VMs I'm stuck on until I can afford new hardware. Thanks!
Restic backup is what I use. I deploy it with a simple Ansible Role. I also use Minio running as a Docker container on a NAS server as the storage appliance.
> When the choice is Backup Exec or nothing, you write your own. Scoped modestly, this isn't even very difficult. Remember that backups are just copies of files, and copying files isn't hard.
Only it's not files, it's versions of files. CrashPlan did it right, it's too bad Code42 has lost its way.
I keep checking back on Restic but it's not quite there yet.
> For my Linux machines I have a cron job to put tarballs to AWS S3. Every few days I manually delete the old one.
You may want to look at restic if you need a more scalable option :)
That said, I would still recommend using Git for projects so that you get a proper commented history, which backup systems don't have (I think they're best at backing up large amounts of unrelated data).
https://restic.net/ provides everything you want. It supports many backends, and since the store is just regular files you can use any other backup tool to store your restic backup. It's de-duplicated, encrypted, snapshotted, and so on. I've been using it for a year or so and I'm quite happy with it.
Shifts don't matter if the chunking is good. For example, restic uses content-based chunking and has no problem deduplicating shifted data. Duplicati, however, uses naive length-based chunking and would fail to deduplicate shifted data.
I've been playing with restic lately and found that on Windows it can backup from UNC paths but you'll be left with a backup that won't restore. Otherwise it's a great tool.
If you haven't attempted to restore from a backup, you don't really have a backup.
It's not exactly simple to use, but for a few months now I've been using this with an external drive, and it backs up 100G (a work laptop with several git repos and tools) in 1 minute without melting the CPU:
Rsync is the canonical solution to this.
If you only want backups and not necessarily a "sync", then search for deduplicated backup software.
My preferred software is restic. It also does encryption pretty well itself.
For generic backups, I have recently switched to restic. It has encryption, data de-duplication, integrity checks etc. However I'm not sure if it works specifically for minecraft backups as well as rdiff-backup in terms of the diffing/de-duplication.
I'm not sure there's anything I could recommend using only what's in a base BSD install.
You could use a really good (read: uncomfortably long and complex) password to mitigate the weak key derivation, but you'd still be spamming it out on the process list while it encrypts and decrypts, and you could mitigate the lack of authentication (with the same caveat) by using openssl dgst -hmac, but it feels an awful lot like bashing rocks together when you could just install gpg and use it in symmetric mode (though I'm not even sure how good that is... it can't be that bad, can it?).
(For what it's worth, I'd suggest just using borg or restic - dump/restore also feels like bashing rocks together)
Borg is great, but its network support is limited to talking to another instance over SSH. rsync.net offer an account for that at 2c/GB/mo, and it has the advantage that you can do things like mark some backup accounts as append-only to limit the damage an attacker can do.
For more general cloudy stuff I suggest restic, which has many backends, including native B2 support as well as rclone. Its crypto was good enough for this crypto guy, at least for casual use.
https://restic.net/ works on Windows, but you would have to use cmd/PowerShell. Also, mounting the backup apparently does not work on Windows, so to get your files out of the repository you will have to use the restore functionality. But still a lot better than copying files in Windows Explorer ;-)