Tag Archives: post-processing

End of life for Denoise Projects, and special offer

18 Jul

The imaging software industry is in motion. Companies like Serif (makers of Affinity-branded software) and Macphun are making a dash from the Mac to the Windows platform, and Google recently abandoned the Google Nik Collection of plug-ins and stand-alone applications for tasks such as black-and-white conversion, colour adjustment, film simulation, sharpening and denoising. Now another contender in this market is showing signs of slowing down.

Like Google, Macphun and others, German publishing house Franzis also develops a variety of tools in its “Projects” series for the above tasks, each usually sold separately. Among them are tools for HDR and focus stacking, and Franzis also develops and sells other imaging software not branded as “Projects”, and is the German distributor of Silkypix raw processing software.

Franzis has now announced the discontinuation of Denoise Projects Professional, a specialised program and plug-in for removing digital image noise. It is unclear whether other discontinuations will follow.

According to the publisher, Denoise Projects automatically detects and removes “all” seven types of noise. Other Franzis Projects products work at a high floating-point bit depth, and the tech specs for Denoise Projects imply that it requires 32-bit GPU acceleration, while the FAQ mentions that it can save 32-bit TIFF. Whether 32-bit floating-point processing is used internally was not explicitly stated.

Those interested can find a 70% off offer here.


ISO 800,000 sample from Pentax KP

30 Jan

Above is an ISO 819,200 shot from Ricoh’s promotional materials for the Pentax KP, with light post-processing from JPEG applied by the breakfastographer. Click on the image to enlarge to full size. The original file can be found here.

Update 11/02/2017:

Also check out this comparison of high ISO noise in the Pentax KP and K-70.

Three neat pieces of imaging research

30 Sep
  1. Correcting for all sorts of flaws in cheap lenses using software
  2. Recording a photograph from the point of view of the light source, not the camera
  3. Extracting or manipulating 3D objects inside a photograph


Hope you enjoyed.

JPEGmini: The results are deflating

8 Feb

After reading about JPEGmini, which promises to reduce JPEG file size up to five times, I got curious enough to try it. First off, the web-based version didn’t work in any browser I tried – and it turns out it’s actually written as a Flash applet, so it should have worked the same in all of them. Writing a tool, publishing it, and then not testing it makes you look like one.

Moving on from that pertinent aside, I should briefly explain that the JPEGmini algorithm is advertised as taking into account properties of human visual processing to produce a JPEG file that is visually indistinguishable from the original, but much smaller. That may well all have been true until JPEGmini met the pixel-peeper that is yours truly.

I mostly photograph birds, which are among the most challenging subjects for imaging technology: their feathers are prone to moiré and aliasing, and any loss of detail from unsharp lenses, strong anti-aliasing filters, lossy file compression and the like is very noticeable. Additionally, they are often so colourful that even a correctly exposed frame can blow individual colour channels, again causing loss of detail. This made them perfect for a test where preservation of detail was the major concern.

Using images that had been raw-processed in GIMP via UFRaw and then exported at JPEG quality 100, I created a JPEGmini-converted version in a separate folder and compared the two versions one on top of the other. The images came from a camera with a weak AA filter and had no resizing applied, so I was looking at real, maximum detail; downsizing with a detail-preserving algorithm might have revealed more severe differences than reported here.

The compressed size using JPEGmini was approximately the same as that of a GIMP-exported JPEG at quality 78. The GIMP-exported version did a better job of retaining sharpness, and if there was a difference in colour retention, it was not apparent. Hence, in the case of GIMP, reducing the quality setting directly may be a better strategy than re-running the file through JPEGmini. Having said that, I have always been pleased with GIMP’s JPEG output, even at lower quality settings, so beating GIMP may have been an extreme challenge. Whether this hints that GIMP’s own engine may invalidate some of JPEGmini’s patents (by way of prior art) is for others to decide.
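If you want to replicate the size side of this comparison yourself, here is a minimal sketch using the third-party Pillow library. To be clear, this is not the workflow used above (that was GIMP and eyeballing); the helper names and the 50–100 quality search range are my own assumptions:

```python
from io import BytesIO
from PIL import Image

def jpeg_size(img, quality):
    """Return the encoded JPEG byte size of img at the given quality setting."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

def matching_quality(img, target_bytes):
    """Find the quality setting whose encoded size is closest to target_bytes,
    e.g. the size of a JPEGmini-converted file."""
    return min(range(50, 101), key=lambda q: abs(jpeg_size(img, q) - target_bytes))
```

Feeding `matching_quality` an image and the byte size of its minified counterpart gives you the direct-export quality setting that produces a comparably sized file – the equivalent of the “quality 78” figure above, for your own camera and encoder.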

As a concluding remark, you may still see benefits from JPEGmini when working with tools other than GIMP, and with cameras that have a stronger AA filter, or with upsampled (digitally enlarged) images. It’s also a good quick fix when you just need to send an image through email or instant messenger, where size may make a huge difference and detail is less important – opening GIMP for these jobs would be much more time-consuming.

Further testing footnotes

Taking the JPEGmini-converted images back into GIMP and saving them again, or resizing and then saving, brought the following results. The original JPEG was 5.6MB and the minified version 890kB; a version regimped at GIMP’s suggested quality of 90 was 1MB, and at 100 it was 2.4MB. If we assume that GIMP’s JPEG encoder doesn’t benefit from looking over JPEGmini’s shoulder (and hence compress better just because someone else has shown the way), we’d have to take this as a very rough indication that roughly 60% of actual JPEG detail is lost when minifying – remember this includes both colour information and sharpness. I could theorise about how JPEGmini does this, but I might be depriving myself of a patent, so I won’t. 🙂 Regimping after minifying, incidentally, is not recommended for any reason – it was done here purely for investigative purposes.
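The back-of-envelope arithmetic behind that “roughly 60%” figure can be replayed as follows. The numbers are the measured file sizes from above; equating a size ratio with lost detail is, as stated, only a very rough proxy:

```python
# Measured file sizes from the test above (rough, not a precise detail metric).
orig_bytes = 5.6e6   # original quality-100 export
mini_bytes = 890e3   # after JPEGmini
regimp100  = 2.4e6   # minified file re-saved in GIMP at quality 100

print(f"minified / original: {mini_bytes / orig_bytes:.0%}")      # 16%
print(f"regimp-100 / original: {regimp100 / orig_bytes:.0%}")     # 43%
print(f"implied detail lost: {1 - regimp100 / orig_bytes:.0%}")   # 57%
```

The 57% figure is what rounds up to the “roughly 60%” quoted above; the regimp-at-100 size is used as the proxy for how much recoverable information survived minification.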