Tag Archives: jpeg

Google open-sources extreme JPEG compression

18 Mar

I previously criticised JPEGmini, a commercial program that promised to reduce JPEG size by focusing on the way humans perceive colour. My repeated finding was that it could not provide better compression than GIMP’s JPEG export – and GIMP is free!

Now Google has fielded Guetzli, a new JPEG compression engine that Google describes as drawing on psychovisual research. In that respect, it doesn’t sound much different from JPEGmini. It is, however, open source, and according to Google promises files 35% smaller than those produced by libjpeg, the open-source JPEG library that is almost certainly the engine behind GIMP’s export.


Original on the left, libjpeg in the centre, and Guetzli on the right. Google notes the paucity of blocky artefacts in the Guetzli output, but I would also note the loss of vibrance, which is typical of noise reduction.

Guetzli, like other recent advances in imaging, is slow. So slow, in fact, that it sounds like its use case is currently limited to very frequently downloaded items like website banners, where spending a lot of time optimizing will considerably reduce traffic.
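Guetzli ships as a command-line binary. As a minimal sketch of how such a batch optimisation might be scripted (the wrapper and its function names are my own, and it assumes the `guetzli` binary is installed and on the PATH):

```python
import shutil
import subprocess

def guetzli_compress(src, dst, quality=95):
    """Re-encode an image with Guetzli; assumes the binary is on the PATH."""
    if shutil.which("guetzli") is None:
        raise RuntimeError("guetzli binary not found on PATH")
    # Guetzli reportedly rejects quality settings below 84, on the grounds
    # that conventional encoders do better in that range.
    subprocess.run(["guetzli", "--quality", str(quality), src, dst], check=True)

def savings_percent(original_bytes, compressed_bytes):
    """Size reduction as a percentage of the original file size."""
    return 100.0 * (1 - compressed_bytes / original_bytes)
```

Google’s 35% claim corresponds to `savings_percent(original, compressed)` returning about 35 when comparing against a libjpeg file of equivalent quality.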

Alternatively, though, Google could put its weight behind FLIF, supporting it in Chrome/Chromium and thereby pressuring other browsers to support it as well. FLIF promises the current best lossless compression, supports progressive (interlaced) decoding – and is open source.

Guetzli might produce a smaller file than FLIF, but only through lossier compression, and it sounds like it is currently a lot slower (I have seen no figures comparing the two).

From a photographer’s perspective, Guetzli may have one further use case. Images compressed this aggressively are unlikely to be very usable for further editing, so for showing off your work on the web while spoiling the fun for potential thieves, Guetzli or any higher-compression tool might be a good choice (but remember – Google can “guesstimate” the image back with RAISR).


Pentax KP vs. Nikon D500: white balance

16 Mar

Continuing my series on the Pentax KP, and possibly starting a sub-series comparing the Nikon D500 against it, today my attention was drawn to ephotozine’s Pentax KP sample photos, particularly the colour section:


Pentax KP (top) vs. Nikon D500 – ISO 200 (left) to ISO 819,200.

The Nikon’s ISO 819,200 sample in particular seems to be soaked in yellow, and the ISO 409,600 sample also looks affected. Here’s the 819,200 comparison, with two attempts to fix the Nikon’s white balance in post:

Screen Shot 2017-03-16 at 00.21.40

Pentax KP (left) vs. Nikon D500 out of camera, Nikon D500 with my own special WB procedure, and finally Nikon D500 after GIMP’s Colors->Auto->White Balance
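That one-button GIMP fix is essentially a per-channel histogram stretch: Colors->Auto->White Balance is documented as stretching the red, green and blue channels individually, discarding the most extreme 0.05% of pixels at each end. A minimal numpy sketch of the same idea (my own approximation, not GIMP’s actual code):

```python
import numpy as np

def auto_white_balance(img):
    """Approximate GIMP's Colors->Auto->White Balance: stretch each of
    R, G, B independently to the full 0-255 range, ignoring the most
    extreme 0.05% of pixels at each end of the histogram."""
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        channel = img[..., c].astype(np.float64)
        lo, hi = np.percentile(channel, (0.05, 99.95))
        if hi <= lo:  # flat channel: nothing to stretch
            out[..., c] = img[..., c]
            continue
        stretched = (channel - lo) * 255.0 / (hi - lo)
        out[..., c] = np.clip(stretched, 0, 255).astype(np.uint8)
    return out
```

On a yellow-soaked frame like the Nikon’s, the starved blue channel gets stretched back across the full range, which is why the one-button fix helps without fully solving the problem.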

If the Pentax’s “fluorescent” white balance is used, it pulls even further ahead of the Nikon – an unfair comparison perhaps, but it’s only a single configuration step and still straight out of camera:

Screen Shot 2017-03-16 at 00.05.09.png

With even more prodding, I eventually got the Nikon image to behave while keeping noise levels on par. Keep in mind, though, that this is a fair amount of work, and you really have to know what you’re doing in Photoshop or GIMP to get this kind of result – the one-button fix is the image on the far right, and while it improves things, it doesn’t really fix the problem. Cutting to the chase, here’s that final result:

Screen Shot 2017-03-16 at 00.48.24

Even with its default white balance, the Pentax does impressively well, keeping in mind that the Nikon image had to be massaged for several minutes to get it into decent shape:

Screen Shot 2017-03-16 at 00.52.50

The bottom line is that in terms of colour, the Pentax produces reasonable JPEG output even at very high ISO, while the Nikon D500 requires considerable post-processing to achieve a competitive result. The Nikon is not usable as a JPEG camera at this ISO; if using the D500 at all, I would recommend shooting raw and leaving it to the raw converter to deliver a reasonable-looking image (that will be the next part in this series, time allowing).

JPEGmini: The results are deflating

8 Feb

After reading about JPEGmini, which promises to reduce JPEG file size by up to five times, I got curious enough to try it. First off, the web-based version didn’t work in any browser I tried. It turns out it’s actually written as a Flash applet, so it should have behaved the same in every browser. Writing a tool, publishing it, and then not testing it makes you look like one.

Moving on from that pertinent aside, I should briefly explain that the JPEGmini algorithm is advertised as taking into account properties of human visual processing to produce a JPEG file that is visually indistinguishable from the original but much smaller. That may well all have been true until JPEGmini met the pixel-peeper that is yours truly.

I mostly photograph birds, which are among the most challenging subjects for photographic technology: their feathers are prone to moiré and aliasing, and they show very noticeable loss of detail from unsharp lenses, strong anti-aliasing filters, lossy file compression, and the like. Additionally, they are often so colourful that even when correctly exposed overall, they can blow individual colour channels, again causing loss of detail. That made them perfect for this test, where preservation of detail was the major concern.

Using images that had been raw-processed with UFRaw into GIMP and then exported at JPEG quality 100, I created a JPEGmini-converted version in a separate folder and compared the two versions one on top of the other. The images came from a camera with a weak AA filter and had no resizing applied, so I was looking at real, maximum detail; downsizing with a detail-preserving algorithm might have revealed more severe differences than reported here.

The compressed size using JPEGmini was approximately the same as a GIMP-exported JPEG at quality 78. The GIMP-exported version did a better job of retaining sharpness, and if there was a difference in colour retention, it was not apparent. Hence, at least with GIMP, reducing the export quality directly may be a better strategy than re-running the file through JPEGmini. Having said that, I have always been pleased with GIMP’s JPEG output, even at lower quality settings, so beating GIMP may have been an extreme challenge. Whether this hints that GIMP’s own engine may invalidate some of JPEGmini’s patents (by way of prior art) is for others to decide.
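For readers who want to reproduce the size comparison, here is a rough sketch using Pillow (my choice of tool, not JPEGmini’s or GIMP’s – and note that Pillow’s quality scale is not identical to GIMP’s, so the matching quality you find may differ from the 78 I saw):

```python
from io import BytesIO
from PIL import Image

def jpeg_size_at_quality(img, quality):
    """Encode in memory and return the resulting JPEG size in bytes."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

def matching_quality(img, target_bytes):
    """Find the highest quality setting whose output fits in target_bytes."""
    for q in range(95, 0, -1):
        if jpeg_size_at_quality(img, q) <= target_bytes:
            return q
    return 1
```

Calling `matching_quality(img, jpegmini_size)` then tells you which plain-JPEG quality setting produces roughly the same file size as the minified output, which is the comparison made above.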

As a concluding remark, you may still see benefits from JPEGmini when working with tools other than GIMP, and with cameras that have a stronger AA filter, or with upsampled (digitally enlarged) images. It’s also a good quick fix when you just need to send an image through email or instant messenger, where size may make a huge difference and detail is less important – opening GIMP for these jobs would be much more time-consuming.

Further testing footnotes

Taking the JPEGmini-converted images back into GIMP and saving them again, with or without resizing, produced the following results. The original JPEG was 5.6MB; the minified version was 890kB; a version regimped at GIMP’s suggested quality of 90 was 1MB, and at quality 100 it was 2.4MB. If we assume that GIMP’s JPEG encoder doesn’t benefit from looking over JPEGmini’s shoulder (i.e. it doesn’t compress better merely because someone else has shown the way), we’d have to take this as a very rough indication that around 60% of the actual JPEG detail is lost when minifying – and remember, that includes both colour information and sharpness. I could theorise about how JPEGmini does this, but I might be depriving myself of a patent, so I won’t. 🙂 Incidentally, regimping after minifying is not recommended for any reason – it was done here purely for investigative purposes.
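The arithmetic behind that estimate can be written out, using the sizes above and treating the quality-100 re-export as a proxy for the information that survived minification (a crude assumption, as noted):

```python
def fraction_lost(original_bytes, recompressed_bytes):
    """Very rough fraction of JPEG detail lost in minification, using a
    quality-100 re-export of the minified file as a proxy for what survived."""
    return 1 - recompressed_bytes / original_bytes

# Sizes quoted above: 5.6 MB original, 2.4 MB after minifying + re-export at 100.
loss = fraction_lost(5.6e6, 2.4e6)
print(f"{loss:.0%}")  # prints "57%" – i.e. roughly 60% in round numbers
```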