(I won't talk about exporting to specific sites like Instagram, as that has already been mentioned; basically, just make sure you stay within their required dimensions and file-size limits.)
I use JPEGOptimizer for all my JPEG exports, as it takes the guesswork out of picking the right compression percentage. Instead, you select a quality level (low to very high) and it finds the correct compression level for you (using jpeg-archive).
It's your lucky day! I just released this a few days ago:
https://github.com/danielgtaylor/jpeg-archive
It uses the standard Structural Similarity (SSIM) algorithm on the luma (Y) channel of the images, similar to how modern video encoders work when set to constant quality factor mode.
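If you want to sanity-check the results yourself, the repo also includes a jpeg-compare tool; in its simplest form it just takes the original and the recompressed file and prints a similarity score (the file names here are only placeholders):

jpeg-compare original.jpg compressed.jpg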
The output is similar to JPEGmini, but not identical. I've run it against a thousand test images and it works well. I'm using it to archive my photo collection.
Right now it's command-line only, but I want to add some scripts to make it easy to run over an entire directory. What do you think?
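In the meantime, a plain shell loop does the job; this is just a rough sketch, and the path and quality level are placeholders:

find ~/Pictures -iname "*.jpg" -print0 | while IFS= read -r -d '' f; do
    jpeg-recompress --quality high "$f" "$f"
done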
Edit: If you like the project then please star it on GitHub so it gets more attention, and don't be shy opening issues for bugs or feature requests!
If you are willing to do a bit of scripting, ImageMagick and PNGCrush are probably what you are looking for. It's what I use, and it works pretty well. You may be interested in jpeg-recompress as well for the lossy (JPEG) images.
Once you have a script that processes images the way you want, it's as simple as dumping images in a folder, launching the script, and waiting for it to finish.
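A rough sketch of such a drop-folder script might look like this (directory names and the resize geometry are placeholders to adjust):

#!/bin/sh
IN=./incoming
OUT=./optimized
mkdir -p "$OUT"
for f in "$IN"/*.png; do
    [ -e "$f" ] || continue
    pngcrush "$f" "$OUT/$(basename "$f")"            # lossless PNG optimization
done
for f in "$IN"/*.jpg; do
    [ -e "$f" ] || continue
    out="$OUT/$(basename "$f")"
    convert "$f" -resize '2048x2048>' "$out"         # ImageMagick: shrink only if larger
    jpeg-recompress --quality high "$out" "$out"     # lossy recompress in place
done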
If you still need a more optimized compression, I can recommend the following command-line tool:
https://github.com/danielgtaylor/jpeg-archive
The jpeg-recompress tool tries different encoder parameters and then picks the ones that achieve the best compression at nearly identical quality.
jpeg-recompress --quality high --strip image.jpg compressed.jpg
This line compresses the given JPEG file further and strips its metadata. I was able to shrink the weekly schedule from 4.2 MB to 1.1 MB this way, without any perceptible loss of quality.
I understand if this is overkill or a poor fit for the broadcast-schedule toolchain, but maybe it will come in handy somewhere else. I can also recommend it to any other user here who compresses their JPEG images automatically and doesn't prioritize speed.
> You are archiving a valuable footage.
True! I keep the originals. :) My scheme is to keep all media neatly organized in the ~/Pictures directory, for example:
Etc etc. I want to keep them all in one place, primarily so they are easier to back up by several means (rsync, btrfs snapshots, S3 sync, etc.), as well as having local access to them at all times. I process the images with jpeg-archive, which keeps them reasonably sized with no noticeable loss of quality, but the videos are too big, so I archive those to S3 reduced-redundancy storage and delete the local copies. Using VP9 and AV1 I keep a local copy that is good enough, but I can always go back to the originals if needed. I never re-encode!
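To give an idea of the backup side (the mount points and bucket name are made up, and the snapshot line assumes ~/Pictures is a btrfs subvolume):

rsync -a --delete ~/Pictures/ /mnt/backup/Pictures/                        # local mirror
btrfs subvolume snapshot -r ~/Pictures ~/.snapshots/Pictures-$(date +%F)   # read-only snapshot
aws s3 sync ~/Pictures/ s3://my-photo-archive/Pictures/                    # offsite copy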
Cheers!
For JPEGs I would say don't convert them, but run them through something like jpeg-archive and call it a day:
$ find . -name "*.jpg" | chrt -b 0 parallel --no-notice "jpeg-recompress ${@:--q high} {} {}"
Even at the high setting you will trim a few megabytes from each JPEG with no noticeable loss in quality (as measured by SSIM or other metrics).
Yep, use jpeg-archive. Every time I gather photos from a vacation or event I process them with jpeg-recompress, using the Node.js tool ladon (similar to GNU parallel) to speed it up:
$ chrt -b 0 ladon "*.JPG" -- jpeg-recompress FULLPATH FULLPATH
The defaults for jpeg-recompress are reasonable for long-term archival of photos. It often trims several megabytes off each picture with no noticeable loss of quality.
I did, unfortunately, read that drivel to the end. Within context, it's clear she was referring to the task runner.
> Often when I speak about these issues I hear “but these are not my users” from an indignant developer. “I don’t have users like that” he will angrily tell me, apparently knowing each and every one of the users that visit his React-powered website.
> “All my users are rich young healthy people, living in metropolitan areas with excellent network coverage,” they tell me, gulping furiously as they do so.
> “These. Are. Not. My Users!”
...but to address your point:
> Frameworks and task runners largely account for the edge cases of JavaScript.
I agree that frameworks are largely a JavaScript concern. But even in the realm of CSS + HTML, if your concern is accessibility and cross-browser compatibility, it benefits you to use a framework like Bootstrap. They've thought of the edge cases, and they put in a lot of work to ensure their code and examples are widely compatible.
As for task runners, if you're using them solely for JavaScript, you're doing it wrong. Well, you're doing it right, but there's so much more you could do with them. Here's a tool to add CSS vendor prefixes:
You can also use task runners to watch for new image files and create compressed versions of them with something like jpeg-archive. Pretty much any repetitive optimization task can probably be automated somehow. Relying on task runners during development benefits the end user and, because the work happens at build time, adds no extra load on the front end.
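Even without a full task runner, a hand-rolled watcher gets you most of the way there. A rough sketch, assuming inotify-tools is installed and with made-up directory names:

inotifywait -m -e close_write --format '%w%f' ./src/images | while read -r f; do
    case "$f" in
        *.jpg) jpeg-recompress --quality high "$f" "./dist/images/$(basename "$f")" ;;
    esac
done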
Check out jpeg-recompress from my project, which compresses images based on perceived visual quality, lets you automate the quality selection, and uses progressive mode:
https://github.com/danielgtaylor/jpeg-archive#jpeg-recompress
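Basic usage looks something like this (file names are placeholders; --strip drops the metadata):

$ jpeg-recompress --quality high --strip input.jpg output.jpg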
Precompiled binaries for Mac/Windows/Linux are here: