In addition to the <code>fslint</code> suggestion (which comes with both CLI and GUI), there is also <code>fdupes</code> which is a CLI-only program.
There is also <code>duff</code>, another command-line utility for finding duplicate files, but it does not itself act on the duplicates it finds; it leaves it to you to pipe its output into a script or handle it some other way.
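As a rough sketch of an fdupes session (the directory names are made up for illustration; <code>-r</code> recurses, and <code>-d</code> would additionally prompt you to delete — check <code>man fdupes</code> on your system):

```shell
# Sandbox with an obvious duplicate pair (paths are just for illustration)
mkdir -p /tmp/fdupes-demo
echo "same content" > /tmp/fdupes-demo/a.txt
cp /tmp/fdupes-demo/a.txt /tmp/fdupes-demo/b.txt

# List duplicate files recursively; nothing is deleted without -d
# (guarded so the demo still runs if fdupes is not installed)
if command -v fdupes >/dev/null 2>&1; then
    fdupes -r /tmp/fdupes-demo
else
    echo "fdupes not installed"
fi
```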
All three utilities are available in Fedora's official repos and thus can be easily installed with dnf.
If you can mount the drive so that it shows up as a folder on your machine, then you can use Linux commands and tools. For example:
For 2, you can use the following command; just make sure you're in the right folder:
find . -type d -empty -delete
This starts from the current folder ".", searches for all folders ("-type d"), selects the empty ones ("-empty") and deletes them ("-delete").
If you just want to see which folders would be deleted without actually deleting them, leave out the "-delete" part; the command will simply list them.
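A quick way to convince yourself in a throwaway directory (the paths here are only for illustration):

```shell
# Build a small test tree: one empty directory, one with a file in it
mkdir -p /tmp/empty-demo/keep /tmp/empty-demo/drop
touch /tmp/empty-demo/keep/file.txt

# Dry run: list empty directories without deleting anything
find /tmp/empty-demo -type d -empty

# Now actually delete them
find /tmp/empty-demo -type d -empty -delete

ls /tmp/empty-demo   # only "keep" remains
```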
For 4, you can apply something similar to 2:
find . -type f -size 0 -delete
This starts from the current folder ".", searches for all files ("-type f"), selects the empty ones ("-size 0") and deletes them ("-delete").
As with 2, if you just want a list of the files without actually deleting them, leave out the "-delete" part of the command.
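The same kind of throwaway test works here too (file names are made up):

```shell
# Create one empty and one non-empty file
mkdir -p /tmp/zerofile-demo
touch /tmp/zerofile-demo/empty.log
echo "data" > /tmp/zerofile-demo/notes.txt

# Dry run: list zero-byte files
find /tmp/zerofile-demo -type f -size 0

# Delete them
find /tmp/zerofile-demo -type f -size 0 -delete
```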
For 3, duplicate files, you can try findup from the fslint package; it has a GUI and a command-line tool too.
Whatever tools you end up trying, I recommend creating a temporary folder with some sample data and using the tool on that first, to make sure it does what you expect. The last thing you want is to delete the wrong files. I know you can actually recover deleted files from gdrive, but if you delete a lot of them, it's a PITA to try and recover them all.
If you're on Linux, fslint is available on most Linux distros.
http://www.pixelbeat.org/fslint/
It has both command line and GUI options.
Plenty of ways of cleaning up dupes too (hard links, symlinks, removing a copy, etc)
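The hard-link option it offers can be sketched by hand with plain coreutils (file names here are made up): both paths keep working, but the data is stored only once.

```shell
mkdir -p /tmp/hardlink-demo
echo "big identical payload" > /tmp/hardlink-demo/original.bin
cp /tmp/hardlink-demo/original.bin /tmp/hardlink-demo/copy.bin

# Replace the duplicate with a hard link to the original;
# -f overwrites the existing copy
ln -f /tmp/hardlink-demo/original.bin /tmp/hardlink-demo/copy.bin

# Both paths now share one inode
stat -c '%i' /tmp/hardlink-demo/original.bin /tmp/hardlink-demo/copy.bin
```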
I like "fslint". It has both command line and GUI front-ends, and is surprisingly fast on 4TB and millions of files.
fslint does everything you need, as long as it doesn't have to understand the file format. That is, the files have to be truly bit-for-bit identical for it to recognize them as duplicates.
geeqie can do that for images, for example; dupeGuru can also handle music, but I've never tried that since my music collection is already completely organized.
I am not sure if that is possible in digikam. If they are exact duplicates (including the metadata and file size), you can use another software that compares hashes to find and delete them, like FSlint (http://www.pixelbeat.org/fslint/).
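The hash-comparison idea can be sketched with plain coreutils, no extra tool needed (the file names are only for illustration): hash every file, then keep only the hashes that appear more than once.

```shell
mkdir -p /tmp/hash-demo
echo "holiday photo bytes" > /tmp/hash-demo/img1.jpg
cp /tmp/hash-demo/img1.jpg /tmp/hash-demo/img1-copy.jpg
echo "different photo" > /tmp/hash-demo/img2.jpg

# Hash every file, sort so identical hashes are adjacent, then print
# only the repeated groups (a SHA-256 hex digest is 64 characters wide)
find /tmp/hash-demo -type f -exec sha256sum {} + \
  | sort \
  | uniq -w64 --all-repeated=separate
```

Only img1.jpg and its copy show up as a group; deleting is then up to you (and exact-duplicate detection like this will miss files that differ only in metadata).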
I don't know if there are Synology-specific solutions to that and I would be very interested to know, too.
But there are also various solutions depending on the OS you're using. On Linux systems, for example, I use fslint, which will look for file duplicates and help you clean them up.
>Digital cameras make it easy to rack up gigabytes of photo archives. Let's learn how to find duplicates and organize all of your photos without making it your life's work. Before doing anything, please have a good backup of your photo files.
They also mention FSlint is in the Debian repos, so it may be in yours. http://www.pixelbeat.org/fslint/
>FSlint is a utility to find and clean various forms of lint on a filesystem. I.E. unwanted or problematic cruft in your files or file names. For example, one form of lint it finds is duplicate files. It has both GUI and command line modes.