Your solution is not optimal: it pegs one of Chrome's threads at 100% on a 4.2 GHz CPU and gets about 2 FPS when scrolling. I think you need to come up with another solution, because this is going to consume a lot of battery on mobile/tablet devices.
From WebKit Wiki: > Sometimes it's tempting to use webkit's drawing features, like -webkit-gradient, when it's not actually necessary - maintaining images and dealing with Photoshop and drawing tools can be a hassle. However, using CSS for those tasks moves that hassle from the designer's computer to the target's CPU. Gradients, shadows, and other decorations in CSS should be used only when necessary (e.g. when the shape is dynamic based on the content) - otherwise, static images are always faster. On very low-end platforms, it's even advised to use static images for some of the text if possible.
EDIT: To add more value to my response: could you perhaps instead dynamically add the blurred overlay to an image using something like ImageMagick, or, if your images are static, just add the blur directly onto the image?
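If you go the ImageMagick route, a minimal sketch of baking the blur in ahead of time would be something like this (the filenames and the 0x8 radius/sigma are placeholder values, not from the original discussion):
convert photo.jpg -blur 0x8 photo_blurred.jpg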
The 2.9 nightlies have some better resamplers by Nicolas Robidoux (somewhat obscure and self-designed, but decent nonetheless).
Otherwise I use ImageMagick with gamma correction, or ImageWorsener.
GIMP 2.8 and earlier does something hugely incorrect and overcomplicated: it repeatedly shrinks by 1/2 until the image is small enough to finish with an upsizing filter.
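For what it's worth, a minimal sketch of the gamma-corrected (linear-light) downscale mentioned above with ImageMagick; the filenames and the 50% factor are placeholders:
convert input.png -colorspace RGB -resize 50% -colorspace sRGB output.png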
I assume you're on linux or OSX and know how to program a little bit. I also assume you want to do this lots of times, not just once.
1) Use imagemagick to crop out the slice of the image you need. It's a command-line tool so you can write a script to invoke it. http://www.imagemagick.org
2) Pass the slice into the OCR API here using the curl or wget command. http://www.datasciencetoolkit.org/
3) Parse the text using your favorite language (Perl, AWK, Java) to extract the timestamp
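A rough, untested sketch of steps 1-3 (the crop geometry, filenames, and endpoint are illustrative assumptions, not tested values):
# step 1: crop out the slice containing the timestamp (geometry is a made-up example)
convert frame.png -crop 300x40+10+10 +repage slice.png
# steps 2+3: upload the slice to the OCR endpoint and pull out anything that looks like HH:MM:SS
# (set OCR_ENDPOINT to the appropriate datasciencetoolkit.org URL from their docs)
curl -s -F "file=@slice.png" "$OCR_ENDPOINT" | grep -Eo '[0-9]{2}:[0-9]{2}:[0-9]{2}'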
--PhD
I'm going to assume that this guy has never had to resize images before. An easy way to fix the "big files" problem is by using ImageMagick. It's as easy as opening a command prompt or terminal and running a quick command.
On Ubuntu for example you'd type
sudo apt-get install imagemagick
mogrify -resize 4000x400 your_image_file_name.jpg (or .gif or .png...)
That's it. If the guy can't quickly do that then he's lazy. ImageMagick is available for Windows, OSX, and *nix.
FLIF! The design is really clever, it outperforms every existing image format's compression (incl. BPG and WebP), and ~~it's on track to being included in browsers~~ (EDIT: there's feature requests, but it's still under discussion). It even has a lossy compression mode with no generation loss.
> TinyPNG
If you don't want to use their website or API, or you want to do compression without going over the network, they use pngquant internally so you can also just use the tool directly. https://pngquant.org/
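For reference, a typical direct invocation looks like this (the quality range is just an illustrative choice; by default pngquant writes a new file next to the original rather than overwriting it):
pngquant --quality=65-80 image.png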
In this case, you are thankful to the Free Software community, but not necessarily Linux. You're thankful for the huge number of free userland packages that give everyone the ability to do things that only very high dollar proprietary packages could do in the past.
Particularly, you're thankful for ImageMagick, which is available for all major platforms, including Windows.
Another option is OptiPNG. Years ago I did a comparison and found it to be amongst the best (when given the right command line options, such as -o7.) I'm not sure how that has changed with time, but it is worth checking out.
http://optipng.sourceforge.net/
EDIT: I didn't realize that TinyPNG was lossy when writing this. OptiPNG does lossless optimization. (Though newer versions appear to have more options.)
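For reference, a typical invocation with the option mentioned above (-o7 is the slowest, most thorough preset, and it overwrites the file in place):
optipng -o7 image.png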
Seconding this; anyone who needs to do similar operations on a large number of images is doing themselves a disservice if they don't get familiar with all the stuff you can do in ImageMagick. Here are some examples to check out.
ImageMagick's convert can do that. You are somewhat likely to already have it installed.
For example:
convert image.jpg -resize 30% smaller_image.jpg
or
convert image.jpg -resize 640x480 smaller_image.jpg
If you want to overwrite the image, use the mogrify command from the same package (options go before the filename):
mogrify -resize 30% image.jpg
You can pass mogrify as many files as you want. If you want to batch-process while preserving the originals, you may want to write a simple for loop:
for image in *.jpg; do convert "$image" -resize 30% "smaller_${image}"; done
EDIT: More examples.
As I just noticed, at 6 MB the plan is quite hefty;
an image optimizer/compressor like RIOT works real wonders there.
Here's a 1.5 MB version of the weekly plan, optimized to 80% with RIOT, as an example: http://i.imgur.com/BBULmkS.jpg
P.S.
Nevertheless, I'm very happy that we have one at all.
Best regards
I was ready to put on my pedant glasses and lecture you about PNG always being lossless...
...but what do you know, there indeed are lossy compressors for PNG. TIL - not something many people will want, but as a web dev, bookmarked for the next time a client wishes to have a huge alpha-transparent image on a web page despite my strong objections. Still, 99% of the time when discussing PNGs, we are talking about lossless compression. Which I think is what the GP meant.
While I disagree that this has NOTHING to do with developing, it still advertises as "CL tools for developers" and ends up being "networking CL tools for developers", which is a very small subset of what it advertised for.
Sure technically the smaller set is part of the bigger set, but I'm not a big fan of vague titles.
Personally, one useful command line tool that has helped me many times is ImageMagick. Insanely powerful tool for any sort of media conversion and editing and much more.
Something everyone already knows. Re-saving jpegs reduces quality.
This video is more a promotional piece for FLIF and it's been posted here a few times already. https://www.reddit.com/r/photography/comments/4el82b/flif_the_next_gen_image_format_demonstrates_its/
One of the super nice advantages is the fact that you can decode "on the fly" if I understand it correctly. So you only need one file, not a full lossless file and a small lossy thumbnail, etc.
"The preview is the beginning of the file" (last paragraph)
ImageMagick, all the way. It is exactly what you are looking for. On the command line, you type
convert file1.png file2.png file3.png output.pdf
I'm sure somebody's written a GUI frontend to it anyway that will make it easy if you aren't familiar with the command line. It's open source and cross platform.
If SSR and code splitting aren't helping there's not that much that can be done in terms of performance. Gatsby really only does the same job as SSR, except renders every possible page (& page metadata) on build time which would improve the initial paint time.
What do you really mean by load time? Are you talking about the time-to-interactive or the initial paint? Gatsby and SSR solutions will really only help you get the initial paint time as low as possible, but for better time-to-interactive you'll need to look at your JS execution path. There are many aspects that could be causing a slow application, for example:
Gifsicle has Windows binaries available, as does imagemagick. Does his telling people how he did it somehow stop people from using online tools? All it does is inform people of a potential method of doing things. I use imagemagick
on Windows for things as simple as scaling images, or splitting a GIF into frames, not because there aren't online tools to do it, but because I know I can just compose a specific command with exactly what I want, run it, and get the output nearly instantly.
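As a hedged example of the splitting-into-frames case (filenames are placeholders; -coalesce rebuilds full frames from GIFs that only store the changed regions):
convert animation.gif -coalesce frame_%03d.png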
You could make a script that runs ImageMagick to create a 1px image with a color:
convert -size 10x6 xc:#121630 -fill black -scale 1x1 wall.png
You could then set that as the desktop background with this tool.
People are confusing compression and resizing. In the past, if opting for the unlimited option, images were resized down to a max size of 2048px on the longest edge. They were ALSO compressed subtly to save storage space in the cloud.
Now they are just compressed, or optimized if they are <16MP. You can still have a beautiful, detailed photo compressed and look great and be printable. For example look at this service http://www.jpegmini.com/ - it compresses the image without resizing it and retains the visual integrity of the image, but saves considerable space.
If I disable cache and refresh, on my decent internet connection, the site is still downloading assets 14 seconds later. Please don't serve poorly compressed 600x800 images (why are half of these pngs?) that you are going to display at a fraction of their original size. Use squoosh or similar to compress images. Why is the Ubisoft logo almost 1mb?
> With a GIF you need to have 100% of the information for each frame.
GIF actually supports frame optimization, saving only the changes between frames, so that's not really true - it's just a pretty useless method for video-like full-motion clips, as is the format as a whole.
First try. Untested so far.
#!/usr/bin/env bash
# nullglob - make globs expand to nothing instead of themselves, when there are no matching files
# nocaseglob - makes globs case insensitive; *.mp3 will match foo.MP3
shopt -s nullglob nocaseglob
while IFS= read -rd '' dir; do
    mp3s=( "$dir"/*.mp3 )
    n=${#mp3s[@]}
    if (( n == 0 )); then
        continue
    elif [[ ! -e $dir/Folder.jpg ]]; then
        printf '# No Folder.jpg\n%s\n' "$dir"
    else
        read -r width height < <(identify -format "%w %h" "$dir/Folder.jpg")
        if (( width != height )); then
            printf '# Wrong aspect (%dx%d)\n%s\n' "$width" "$height" "$dir"
        elif (( width < 500 )); then
            printf '# Low res (%dx%d)\n%s\n' "$width" "$height" "$dir"
        fi
    fi
done < <(find . -type d -print0)
The identify command used here is part of ImageMagick. It's usually not installed by default, but typically available in your OS's package manager.
> Where does one start to learn about optimization?
Specifically for ImageMagick, I recommend this and this.
> Is the input video only or will it take a .gif and optimize that?
It converts videos to GIFs. It could also read GIFs directly, but that would mess up the timing due to the way FFmpeg handles GIFs, so I use VirtualDub to convert GIFs to AVI instead.
> How would one even use the script you provided?
That's where the problems start. I just added the link so the comment isn't completely obscure and useless. Just to check the file size, I did this:
makegif.pl -q 5 GSqXDdK.avi GSqXDdK_2.gif
And the problem is that you'll need a Perl environment, FFmpeg, Gifsicle and ImageMagick to run the script. In my experience, most people here don't even know how to use cmd.exe, so I skipped writing a tutorial that explains how to download and install every single one of these programs and covers the basics of the command-line interface.
If you're looking for something small and free, try ImageMagick. It's a set of open-source command-line image editing tools: converting to a different format, resizing, rotating, changing gamma or other levels, and a lot of other features, all available in batch mode. Refer to the format list if you need to convert from some specific format first.
That said, I'd stay away from it if you're intending to convert raw images; though it handles most formats and you can pass it several correction parameters like gamma, the images tend to look flat in my opinion compared to the output of other tools.
> In which case it will be noticeably worse than JPEG
No, especially not under generational recompression. In fact that's a big selling point of FLIF:
> One of the advantages of using a lossless format in a lossy way (as opposed to using a lossy format), is that generation loss is not an issue. Of course the information that is lost, stays lost, but no matter how many times you save a FLIF file, it will not get any additional loss from a decode-encode cycle.
from http://flif.info/lossy.html so "lossy" FLIF should not degrade when recompressed to an identical or higher quality.
> in any case is not what is being presented in this video
It is. If you look at the finger on the apple or at the background/skin interface near the singer's right brow (close to the spotlight) there is noticeable quality degradation wrt the initial frame (at least it's noticeable in 1080 fullscreened)
That's certainly motivation to push it upstream if you developed a fix in-house. However, sometimes new features (and a bit less often, fixes) are very high-value, to the point where handing them out for free seems like bad business sense.
Suppose I am a developer at Imgur. Like other image hosting sites with small image processing features (e.g. Photobucket), Imgur uses the open source Imagemagick toolkit for manipulating images. After a couple months of hard work, my team modified Imagemagick to let it work directly with PSD (Photoshop) files. This is a huge improvement, and would catapult open source image processing forward if pushed upstream, but... a couple months of three developers cost Imgur $50000-$60000. It's not a huge amount, but certainly nothing to sneeze at. Do we really want to deliver this on a silver platter to our competitors just out of the goodness of our hearts? And practically speaking, why should other non-competitor companies get to benefit from our investment at no cost (not just the immediate one into this feature, but also from the talent we hired)? This upgrade has tangible value. Why throw it to the wind?
The motivation to keep changes in-house goes beyond that when you consider trade secrets, patents, licensing/liability/warranty issues, and more fun stuff. The main motivations to push something upstream are somewhat easier maintenance (as you pointed out), good PR, and possible long-term benefits from people being super-familiar with your code (see: the Unreal Engine).
Open source is a nice concept, has had some very nice outcomes, and I believe should continue to be pushed to become more popular. However, the current state of software engineering strongly encourages not actively contributing.
One interesting caveat to note: if FLIF ever catches on and gets wide browser support, then CSS resizing will actually be the best option available. The way FLIF works is that it only downloads as much of the image as is required to display at the current resolution. So pointing at a giant image won't have any penalty, and using the same image for all screen resolutions will work without any downsides.
But FLIF just came out of beta, so it will probably be 1-3 years before we see browsers willing to adopt it.
That page is comparing the lossless FLIF to a lossy JPG. FLIF has a lossy mode too: http://flif.info/lossy.html
It's pretty close to JPG. They concede that JPG is slightly better at really small sizes, you can compare yourself here: http://wyohknott.github.io/image-formats-comparison/#swallowtail&jpg=s&flif=s
Wow! 20 MB PNGs is a lot.
That's good to hear.
I'm curious; will you be losslessly compressing the PNGs using something like OptiPNG to save file space? That'd be pretty rad.
Quick heads up on the network performance side of things: Get Chrome's page speed insights extension if you haven't already.
Things you could look at:
It would be super helpful to know exactly what your script does/what libraries/APIs it uses (and why you can't just run it on node right out of the box). As someone who has used phantomjs a lot, I would strongly recommend against it unless there's absolutely no other way (at which point it's great).
You mentioned image scrambling - can you do the scrambling with ImageMagick? If so, you can use one of the node ImageMagick modules and likely have an extremely easy and performant solution. Without more details, that's the best I can offer.
Imagemagick is probably the most feature-complete graphics toolkit out there. It's also small, quick and very, very free. If you don't mind the command-line interface, converting your photos to a GIF shouldn't take more than 20 seconds.
The specific command would probably be "convert -delay 5 *.jpg -resize 320x200 output.gif" (settings like -delay need to come before the input images), but here's the complete animation manual.
Been trying to work on a Rust port of a decoder for FLIF. Unfortunately, the specification isn't entirely done and the C++ code isn't well documented, so it's quite the challenge to understand it all, especially with my lack of experience in the area. I'm currently up to the point of reading transformations.
Nice! It helps out a lot with the user experience. Speaking of... your page load time was 17 seconds on a gigabit connection. That's not good.
There are a lot of scripts that could probably be combined or eliminated to reduce page size.
I see you have something that is automatically sizing the images out (I tried to grab your uploads from this month as an example, and it tried to spit out every size imaginable), but if you run your images through https://tinyjpg.com it will save you a ton of bandwidth. I scraped the images off of just this page alone and it came out to 3 MB. I ran those images through tinyjpg and it compressed it down to 1.5 MB. 1.5 MB doesn't seem like a whole lot until you have 500 people hitting your site at the same time and you're losing people from not sticking around because the site load time is so high. I compressed the batch of files from the page scrape here: http://sb.eddie-muller.com/nwk/
You spend additional time evaluating the contents of the file with different algorithms to produce a pixel-for-pixel identical image at a much smaller filesize (lossless compression).
Your image displays the same but can be half the size. Since most images on the web are uploaded once and sit forever, putting in the additional time up front to compress them better means every time they are seen they download faster and save you on bandwidth. That's particularly important for mobile devices, which have higher latency, slower speeds, monthly bandwidth caps, and are more expensive per bit.
Simply scanning your existing server for pngs and then recompressing them all can be a huge savings.
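A hedged sketch of that scan-and-recompress pass using the command-line tools mentioned elsewhere in this thread; the path is a placeholder, and optipng rewrites files in place, so test on copies first:
find /var/www -type f -iname '*.png' -exec optipng -o5 {} \;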
If you're on a mac, toss one of your photoshop saved pngs into ImageOptim.
For Windows, try PngGauntlet.
Those are both easy to use applications. Neither one will get you maximum compression on their own, you can follow the instructions on that wiki for that if you want. I'm in the process of research, design, and planning for a cross-platform application that will produce the best possible compression every time.
As DaftPump said, use ImageMagick. Here is how to do what you want.
Once installed, just run convert dragon.gif -resize 50% half_dragon.gif (change the filenames first, of course).
You can even nest it inside a for file in *; do ...; done loop if you want batch processing.
I haven't tried this but it shouldn't be that difficult to do. If I were to roll my own I would try to do:
1) Take a screenshot with chrome (" google-chrome --headless --disable-gpu --screenshot http://www.example.com/")
2) Use imagemagick to compare the screenshot from the previous captured image (http://www.imagemagick.org/Usage/compare/)
3) Use imagemagick to count the number of highlighted pixels to determine how big the change is. The link above seems to calculate statistics that you can parse, and you can optionally apply a threshold to only be notified when the number of changes is above a certain value.
4) Send a notification via any kind of messaging.
Then plop that into a script and register that as a cron job with crontab -e and you should be set.
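An untested sketch of steps 1-4 glued together, under the assumptions above (the URL and filenames are placeholders; on the first run, copy screenshot.png to previous.png manually):
#!/usr/bin/env bash
# 1) take a fresh screenshot; headless Chrome writes screenshot.png to the current directory
google-chrome --headless --disable-gpu --screenshot http://www.example.com/
# 2+3) count differing pixels against the previous capture; compare prints the count on stderr
changed=$(compare -metric AE previous.png screenshot.png diff.png 2>&1)
# 4) notify if anything changed (swap echo for mail, a webhook, etc.)
if [ "$changed" != "0" ]; then
    echo "Page changed: $changed pixels differ"
fi
mv screenshot.png previous.png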
The word here is resampling, not antialiasing. We try to reduce the number of pixels in the raster image while keeping all the small details intact. There are more than a dozen algorithms to do this, each producing a different output in terms of blurring, sharpening, moiré, speed, etc. No algorithm is perfect: one is good for photos, another works best with line drawings.
Edge probably uses some sharpening filter, such as Catmull-Rom or Lanczos3, which is not particularly good for logos: thin lines become aliased and grainy. Edge is also using GPU to render the page; it's faster but also has limitations on the complexity of image processing.
Possible confusion. I got mixed up reading this, hopefully this helps.
Your command line example isn't using the gimp add-on. It's a separate tool http://www.imagemagick.org/script/index.php
So these are two separate free ways to go about it. Super baller!
Edit: Another bonus! Using this method produces images that have more detail; it looks like it removes noise, so the image looks super crisp.
For next time, grab ImageMagick and do in that folder:
mogrify -resize 2048x *.hdr
(replacing 2048 with your desired horizontal resolution)
That will replace the original files, so you might want to set the -path flag to put them into a different folder instead.
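For example, a variant that writes the resized copies to a separate folder instead of replacing the originals (the folder name is a placeholder and has to exist first):
mkdir -p resized
mogrify -path resized -resize 2048x *.hdr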
ITYM "As long as they don't overlap", which, unfortunately, the NASA images do, quite a lot. Properly stitching those together in ImageMagick is a wee bit more complicated than append
or <code>montage</code>. Re: www.imagemagick.org/Usage/scripts/overlap.
I bet you could get a pretty good result from imagemagick's composite function. http://www.imagemagick.org/script/composite.php
Assuming the camera does not move. You take a background frame for reference, and subtract it from each frame containing the ball. Alternatively, if you have a solid color background, you could set that color (range) to be "transparent" in the frames, then stack and merge them.
Either approach is going to be easier than manually masking the ball in every frame.
For those of you looking to use these massive files but unable to open them even in GIMP, here's how. EDIT: As socialery points out, ImageMagick is available cross-platform; the following instructions are just for Linux.
convert YOUR_FILENAME_HERE.tga OUTPUT_FILENAME.png
Enjoy! I was able to convert my ~1.9G image down to ~120MB
For bonus points throw in -resize 50% or any number of additional flags; run man convert for more.
Install the skin first:
This will take awhile.
Then install:
And enable them in settings. Once that is done you will be able to activate the "Weather" widget.
This was one of the bugs in the original Eminence 2 MOD release that is fixed in this version.
Also, all the images have been run through jpegoptim & OptiPNG.
This reduced the size of the skin from 29.5 MB to 18.9 MB.
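For anyone wanting to do the same, a hedged example of the batch commands (run from the image folder; --strip-all drops metadata, and both tools rewrite files in place):
jpegoptim --strip-all *.jpg
optipng -o5 *.png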
As other users helpfully pointed out, tools like tinyjpg.com can help.
The HTML and styling used can also make some difference.
Hope this helps.
There are also programs that can "destroy" a PNG image to make it better fit the lossless png compression algorithm. Essentially generating a lossy png. One such tool is https://pngquant.org/
But I have to say, the artefacts in the images look more like they come from JPG.
As others suggested, I would go for WebP.
I'm here to suggest a tool called squoosh from Google Chrome Labs.
It's actually an image compression tool.
This is funny; at work I'm working on doing a mass image optimization for our public-facing web apps/web pages.
Right now I'm using ImageMagick, and just doing a mass lossless optimization - http://www.imagemagick.org/script/index.php
Just another option:
http://www.imagemagick.org/Usage/montage/
Just set up a file watcher on your image folder and have it run montage ./path/to/folder/* -geometry +0+0 mont.png
every time you modify a file in that folder
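One way to wire up that watcher on Linux is inotifywait from inotify-tools; an untested sketch, with the paths as placeholders:
while inotifywait -e close_write -e create -e delete ./path/to/folder; do
    montage ./path/to/folder/* -geometry +0+0 mont.png
done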
If they are all going to be cropped exactly the same I would use ImageMagick tools. If the images aren't centered the same and you are looking for a program that can find the edges, then ImageMagick might be able to do it, but it can be easily confused and pick the wrong coordinates.
They could also use imagemagick
After installing it with their package manager, it's as easy as running this command from the command line in the current directory:
convert *.png output.gif
This calls the convert command, which is part of the imagemagick suite and is used to modify images and save the result in a separate file (see man convert or the online IM docs). The *.png part selects all png files in the current directory as input (the png can be changed to jpg). The output file will be called output.gif.
You can also add the flag -delay 10 to add a tiny delay between each frame of the gif. A larger number will give a larger delay. Finally you can add -layers optimize to optimize the gif for a lower file size at the expense of fewer colors and lower quality, which is why you might just want to resize the final gif or the individual screenshots (e.g. convert output.gif -resize 50% output_small.gif), or upload the final gif to gfycat, or even look into gifsicle, which is a separate utility that is much better at optimizing gifs.
ImageMagick's convert command may help. You could write a bash script that does the following:
I have never used ImageMagick, but let me know if it works for you.
Here's the website:
You can find objective measurements there. Also, Jon Sneyers seems to be the principal maintainer and perhaps researcher.
First of all, be selective of which photos you want to keep. It's of no use to keep everything. Delete all out of focus/bad photos.
Once you've done that simply apply the 3-2-1 backup strategy.
"The 3-2-1 backup strategy simply states that you should have 3 copies of your data on two different media with one copy off-site for disaster recovery."
So basically back up all your photos to an external/internal HDD/SSD and also back everything up in the cloud. Preferably, make an extra backup to another external HDD just to be sure.
Since most of your photos have been taken with your phone the quality won't be DSLR/mirrorless quality so it's perfectly fine to use a site like https://tinyjpg.com/ to decrease the filesizes.
As for your videos, upload everything to Youtube. Set your videos to private and you have unlimited backup.
One small milkshake, please.
I just put through a commit with these changes. I ran the images through OptiPNG first to save space (and to avoid revisions that increase the size of the offline download).
Chica does look more interesting now. I think this will increase her appeal to those to whom it's meant to appeal.
The funny thing is that if you apply modern video coding advances to JPEG (without making a new format!), you'd be surprised at the quality increase you can make.
It's just that everyone is using that single free JPEG library made in the early 90's...still using 90's technology.
Here's a few guys trying to commercialize that idea: http://www.jpegmini.com/
Download http://luci.criosweb.ro/riot/ This program can compress images very, very well whilst keeping the best image quality.
And I wouldn't shoot in basic JPEG if you want to edit them later in Photoshop. Take the pictures in better quality so you have more data to work with whilst editing. When that's done, use RIOT to resize and reduce the file size at the end.
The program works on either one file at a time or in batch mode.
You made this on Photoshop? Wow, I can imagine how tedious it was to resize each image manually. For things like this I recommend the ImageMagick suite, it has a tool exactly for this purpose:
montage -geometry +0 -resize 100x100^ -extent 100x100 image1.jpg image2.jpg ... output.jpg
The command above resizes each image to 100x100 (keeping aspect ratio) and stitches them together in a grid, writing the result as output.jpg.
Here is an example using the covers from Alstroemeria Records and ALiCE EMOTiON from my music library.
Use ImageMagick on the command line. It should be very easy to whip up a script of some sort to do this automatically on all the photos in a directory. If you want, I can help you.
Something like
FOR /R "%cd%" %%I IN (*.JPG) DO mogrify.exe -auto-level -despeckle "%%I"
(warning, that will modify the images in-place, so make sure you do it on a copy of the folder if you want to keep the originals)
This is oddly worded but Imagemagick can be used for this and is very quick. It has quality and resolution options. Just write a script like this suggests:
http://www.imagemagick.org/discourse-server/viewtopic.php?t=14587
You can just write a script using something like imagemagick's identify.
Something like (typing on phone, can't test):
find . -iname "*.jpg" -exec sh -c 'identify {} | grep 1920x1080 > /dev/null 2>&1 && echo {}' \;
should print the names of all files with size 1920x1080.
What I would probably do is first convert each jpg into pdf and then glue all the pdfs into one pdf.
To convert jpg -> pdf, download ImageMagick, and then from the command line run something like:
for f in /path/to/images/*.jpg; do convert "$f" "$(basename -s .jpg "$f").pdf"; done
(This is what I would do on Linux, at least. basename is a GNU core utility.)
Now that you have a bunch of pdfs, you can combine them with pdfjam, which comes with TeX Live.
$ cd /path/to/images
$ pdfjam -o myoutput.pdf -- *.pdf
It depends on what you mean by similar - if you mean essentially as a grid of values then imagemagick can do that.
http://www.imagemagick.org/script/compare.php
If you expect images with small shifts, scales, or similar content to be treated as similar, this kind of approach will probably be disappointing. Other approaches include taking a perceptual hash of the image - sort of a small fingerprint that will be similar for similar images; a quick google shows an implementation on this site (I haven't tried it):
Otherwise there is image registration: trying to match up the rotation/scale/other configuration of an image with another, which could lead to a similarity measure. Medical image analysis is an active area for this sort of thing.
And I guess there is also automatic image annotation, where you attempt to label images at a human-language level; that could lead to a measure of similarity as well. For instance, you might want to say a picture of a dog is similar to another picture of a dog, but they could be in very different poses with different colours that wouldn't be close on a purely image-based system.
If you can handle a little Linux scripting, there is a "convert" utility (which invokes ImageMagick) to do exactly this.
edit: ImageMagick has Mac & Windows versions too, but I haven't used those.
Back in the old days of scanning film, the scanner would typically tack on a tiny amount of sharpening.
Most visual effects pipelines will include some component of sharpening to compensate for things like the flattening of plates (compensating for lens distortion) and stabilization, which add small amounts of blurring as a necessary byproduct of the operation (unless you're working in some wacko pipeline that turns 2k input images into 4k's for production work - but nobody does that).
Lest it's not obvious, many TV's actually have sharpening in their core function, which you have to drill down to and turn off. A TV sharpening every signal it receives is, needless to say, fucking stupid - - but someone at Sharp (or fill in the blank) wants their expensive engineers to have a visible effect on the display, I guess? So if you've got a new TV, you might want to look for and turn off that default sharpening.
As for "correct" sharpening, there's no such thing as empirically correct sharpening. You're distorting the contrast ratios of adjacent pixels in an arbitrary way, according to what looks good. There ARE good arguments for certain kinds of sharpening in certain cases - lanczos filter & energy preservation, for instance; or the algorithms used to pull accurate color and intensity when an image is being de-bayered.
a couple fun discussions: http://www.imagemagick.org/Usage/resize/ http://en.wikipedia.org/wiki/Bayer_filter
Each camera/sensor maker will have their own special sauce for pulling the best image out of their constellation of pixel sensors (RGGB in whatever configuration.) - - so in that sense, sure, just about any digital camera is likely to yield some component of sharpening, but the semantics of whether this is "post" or not are debatable.
Whoops - started to write a book. Signing off now ;)
I haven't used WebGL, but here's my understanding from a general graphics programming standpoint.
The graphics card doesn't know how to read a JPEG. The texture has to be converted into a bitmap before it is loaded into video memory. Therefore the image format you use only affects download times and RAM usage.
I would say use whatever you can get away with. If your texture has gradients and photographic-like bits, use JPEG, and try various compression levels until you find a quality loss amount that you're comfortable with. If you're compressing line art or text, or anything with large areas of flat color, use PNG. And always run it through ImageOptim (Mac) or OptiPNG.
As for the models, OpenGL is format agnostic so just use whatever format supports the features you need.
made by http://www.jpegmini.com/ - company for image compression :D :D
EDIT: It goes deeper! :D jpegmini.com webpage is associated (maybe it's the same company?) with Beamr.com, a video compression company. I guess those companies can be real but them doing Silicon Valley page? Great joke! :D
It's stealth spam, this account is an hour old ... first comment is on a post from the JPEGMini blog about how "well written" the blog post is.
Dror Gill is also the CTO of JPEGMini ... would be a hell of a coincidence if this "Gill" wasn't related, IMO.
Take the image and first resize it to something reasonable---the images you have are probably 4000px wide, and depending on your use, could probably get away with being 1600px wide.
Better yet, you can use media queries to deliver a different background image (different images with different widths).
Take these smaller images and run them through a JPEG minifier, then download and use the resulting images.
This should suffice for a beginner, but there are more elegant tactics once you get more experience. If you are comfortable with it, make sure GZIP is enabled on your web server, as it will save a lot of bandwidth.
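A quick, hedged way to check whether GZIP is actually being served (example.com is a placeholder; look for a content-encoding: gzip line in the output):
curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' https://example.com/ | grep -i content-encoding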
I use pngquant myself. There's the PNGoo batch program that works really well. I've been able to compress my files a bit more than just 60% average. Of course it depends on the amount of different colors in your image. But I've had images range between 15-30% of the original files.
There is squoosh.app, which can already compress with JPEG XL using WebAssembly. I'm sure there is a decoder as well - yeah, found it in their GitHub repo.
It's a very simple site, so there's not much to comment on, but here are a few points...
If you're doing batch image processing, I strongly recommend using a command line API (or library) like ImageMagick. COM is for automating applications, for sure, but this is not a good use of either COM or Photoshop in this case. A free command line utility can do all of this (and way more quickly and efficiently, and you don't have to have an instance of photoshop running).
Probably the problem is with the MVG format: http://www.imagemagick.org/script/magick-vector-graphics.php
You can include files in it, and after processing, the contents of those files will be visible in the resulting image.
Another problem is that the image format is not always correctly checked, so you can upload an MVG file as a PNG and the remote server will process it.
convert.exe in imagemagick makes this so painless:
http://www.imagemagick.org/script/binary-releases.php#windows
convert.exe png1.png png2.png png3.png output.pdf
It supports wildcards, so you could use a batch for loop:
And you're done. Just output the pdf in the root as foldername.pdf
I just tried this (with x0109x and variations) and it did not work. Images enciphered with ImageMagick also don't look like the imgur link. See here for examples: http://www.imagemagick.org/Usage/transform/#encipher
Isn't ImageMagick the standard for stuff like this? Basically, you write some ImageMagick optimisation functions that are triggered once an image has been uploaded to the server.
You can also try using PIL if you want to do it in Python.
Well there are various image manipulation libraries available but it depends on the programming language you want to use and specifically what you mean by "edit photos".
ImageMagick is a good tool for automated image processing tasks, and is also available as a library that you can control from other programming languages such as C++.
A popular choice for Python programmers is PIL, which is perhaps more accessible than ImageMagick while still being excellent at the most common tasks.
I looked around and I couldn't quickly find an online C++ compiler that would allow the program to make a file and the user to get it quickly.
I scrolled down far enough and one answer is in JavaScript, fyi.
Edit:
I ran it on my computer and found out I needed to install and use Image Magick to convert from ppm to png:
http://www.imagemagick.org/index.php
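In case it saves someone a search, the PPM-to-PNG conversion itself is a one-liner (filenames assumed):
convert output.ppm output.png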
Thanks so much for your time, really appreciate it man! Hmm, indeed the font is low-res there; will fix that.
Thanks for the PNG tip! When I deal with PNGs I usually use ImageMagick; it gives fine control over how many bits per pixel and also the dithering method. On Android you want textures to be in a format supported by the graphics chip in hardware; in most cases that is the ETC format, which is a little pain because it doesn't support alpha (a separate one-channel alpha mask is used to blend the base texture in the shader...)
Be sure to use the below coupon code GGEMREEG which gives you some free gold!
ImageMagick is an open-source, lightweight image editor with a trim command. It's not as intuitive or simple as Photoshop or even other free apps, but this is the only one I know of.
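For reference, the trim operation being referred to looks something like this (filenames are placeholders; +repage clears the leftover canvas offset after cropping):
convert scan.png -trim +repage scan_trimmed.png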
ImageMagick is a generic image read / write / process / convert core C library with wrappers to make it a command line tool or a bolt on library for a number of languages; C, C++, Python, et al.
You might enjoy reading the core read / write routines for the various BMP formats.
It's a simple enough format; its advantage in a Windows environment is that it's a single system call to xfer .bmp's [memory] <-> [file]. Its disadvantage is that all the world is not Windows.
Personally, I most of the time use imagemagick when I want to do something very simple (like making an animated GIF from a few frames from an episode). Otherwise, GIMP works pretty well, and if you want to make something a little more advanced, the GAP plugin is extremely useful. It's a bit hard to use at first, but once you get used to it, it works nicely!
FLIF would be better than PNG or zip. Zip does around a 25-30 percent ratio, PNG does around 50 percent, and FLIF does around 70 percent.
E: if anyone is curious, I compared FLIF to PNG and JPG. It was on average 50% the size of the PNGs, and it was also smaller than the JPGs for images with more solid colors, or within just one or two megabytes for images like photos.
This was with UGUI FLIF, based on an early, pre-1.0 version of the encoder from several years ago. As a result, encoding generally took around 1.75 times longer.
Since then there has been work put in to make it faster at encoding (and decoding). I'm also betting, but cannot be certain, that if the GUI were using a recent version of the encoder, the images would also be smaller.
PNG has been used for images that need transparency. This format, though, has the benefits of PNG plus an optional lossy mode that prevents generational loss, and higher compression, making it more attractive than JPEG (which is why PNG never replaced JPEG). Furthermore, the way the interlacing is handled, it never encodes the same pixel twice, which improves both the speed of rendering an image and its compression. Check out the example page or this video comparing the rendering and loading to PNG. This really is a game changer.
All files produced by it can be opened by it. But it's from October. The next version is scheduled to be released after FLIF format is finalized. It may support converting older versions into newer stable ones if this feature is requested. It would be a trivial task. Just include the old converter with the new one. Convert from the old flif to pam, then from pam to stable flif behind the scenes.
How much further? One of the things that drives me nuts about Google Page Speed Insights is that it's a diagnostic tool pretending to be a grading tool. I've seen it dock points from a site for "uncompressed" images that are only a couple of KB (or a literal handful of bytes) in difference from whatever compression algorithm it's calculating with.
GPSI has no sense of the scale of issues (often meaning they're non-issues) or of trade-offs (e.g. an important image that you need to be higher quality even if that means an extra tenth of a second in average load time). So don't go out of your way chasing a meaningless GPSI score if there's not actually a problem.
But to answer your actual question, I love TinyPNG (has a WordPress plugin, works for JPEGs too) for the vast majority of my images. And JPEGmini is the best I've found for optimizing big hero image photos with no visible quality loss; seriously, it's some kind of witchcraft.
You should really optimize the photo of yourself.
It's nearly 2.5 megabytes, there's really no reason for it to be more than 20-30kb.
Reduce the size in pixels (currently 4272x2848 and only displaying at 310x290) and run it through https://tinyjpg.com/ or optimize it locally yourself.
For anyone having trouble with the image's size (in KB): https://tinyjpg.com/
It works with PNG too, but that's not a recommended format for photos.
Actually, I recommend that the mods use tinyjpg rather than whoever submits the image. Imgur already re-encodes the image and it may lose a lot of quality.
> 200px tall header image that has gobbled 160kb of traffic
Goodbye, /r/metalgearsolid, I'm not opening you again.
Oh, and it's actually 400px until actual content. Browsing that on a laptop with 768px tall screen is fun!
Edit: they didn't even bother optimizing it. Their 160kb header is easily compressible to 60kb simply by running it through pngquant.
Edit 2: messaged the mods of that sub and provided links to a compressed header image as well as a web frontend to pngquant
In Google Page Insights I am now at 100 on desktop... mobile is always a bit lower.
I squeeze with https://squoosh.app/.
I use MozJPEG.
Try to use some cache plugins. I use WP Fastest Cache Premium and Asset CleanUp: Page Speed Booster (cleans unneeded assets from pages).
Also, I use Cloudflare (international site) plus their plugin plus WP Cloudflare Super Page Cache.
With all that it is pretty easy to get to the top...
Good luck.
Note that you can also use `drawable-nodpi/` to prevent the system from resizing your image. The real question is: what do you intend to do with those images? These resource qualifiers are there to give you the ability to provide images dimensioned for certain display densities, to ensure a consistent physical size on screen while retaining the best quality. If your image can handle being up/downscaled at runtime, or if you provide the user the ability to pan/zoom the image, etc., then you should place it in `drawable-nodpi/`.
As whether your image is too large (in terms of resolution or file size), that's a question that only you can answer. Considering that there are very few phones with displays able to show your entire image at 100% zoom, you probably don't need to supply it as a 4,000x3,000 file. You should also look into better compression if possible (try higher JPEG compression ratios or other formats like WebP, check out https://squoosh.app to test various methods).
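If you want to test WebP outside the browser, the cwebp tool from Google's libwebp does the same job on the command line; the quality setting here is just a starting point:
cwebp -q 80 photo.jpg -o photo.webp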
Perhaps even more surprising is that they don't serve webp or avif to supported browsers, despite it being something they like to encourage on other sites with their own tools. Putting it through squoosh.app (a tool made by Google employees) suggests a 50% reduction with no noticeable loss in quality. With the picture/source syntax, it should remain compatible with even the oldest of browsers (which I assume is a big reason).
When it comes to being able to manipulate images, there are ways to process jpeg and other formats through javascript as well. There is this nice app by Google devs to help you compare image formats for instance, and can re-encode stuff on the fly to help you test different encoding options etc: https://squoosh.app/
Yes, JPEG is not XML and SVG is not binary, and I don't care if you think vector graphics should be called an image or not; for usage on the web they are sufficiently image-like to be used as images. You can render all of them to the screen in a browser and you can edit them all with JavaScript.
Not to burst your bubble, but Save for Web does an average job at best for image compression. If you're on a Mac, I highly recommend ImageOptim (http://imageoptim.com) and compressing all your images through there before publishing them online.
It's very pretty, but I'm getting some flickering on the title images in the teal header for each section.
Also, I'm pretty sure that with some good image optimization you could cut down on load times quite a bit. Dunno if you know about it, but ImageOptim has been a life saver for me.
I'm using OSX 10.8.2, Safari 6.0.2.
Don't automatically panic. I would do the following:
I think you will find just doing those few things will significantly increase your score.
Also speak to your web company, and ask them to try to increase your score on web.dev. I am sure there are a number of other things they could do to cut out the bloat, but that will involve looking at the plugins etc.
I can recommend Caesium, a utility (Windows; Mac version in alpha test) to remove all EXIF, metadata, etc., which will reduce your JPGs in size quite a lot without using higher JPG compression (lower quality).
It's free. Try it. You'll be surprised how much "junk" your JPGs contain :)
You can also use other tools, for example ImageMagick (http://www.imagemagick.org/script/index.php): copy all the pics into one folder and then create the output JPG with this command:
convert *.jpg -evaluate-sequence median OUT.jpg
Credit to this guy: https://patdavid.net/2013/05/noise-removal-in-photos-with-median_6/