Hello!
I would like to suggest adding more practical explanations for the image saving options. I suppose there are a few computer scientists who understand what a Huffman table is and why it should or should not be optimized, or what it means for a DCT method to be slower or faster. But the rest of us are left to experiment. Some will take the time to run the experiments, good for them; the rest will either ignore the options or take a wild guess.
In particular, the options I didn't understand recently (and whose explanations I couldn't find in XnView's User Guide (Info -> Help)) are:
"Progressive" (What it means and what it affects.)
"Optimize Huffman table" (Explanation of meaning could go to the user guide, but the evident effect on the result image should be in the tooltip — how do I tell should I check it or not?)
"DCT Method" (It is useful to have a note "best" or "worst" at the options but I would also like to know what it actually changes.)
"SubSampling factor" (Nice to see an option with a note "best quality", but what are the two others: which one of those is better? And how is this "quality" different from the 0-100 quality I could choose above?)
I have a hunch of what quality or smoothing is, and I understand what it means to keep or not keep some metadata. But let me use the other options, too! I may not be a very advanced computer user, programmer, or mathematician, but just explain those options to me properly and I'll be happy to use them! :) That's why they're there, right? Because you wanted me (and all the rest of the XnView users) to use them!
More explanations for simple users
- XnThusiast
- Posts: 2443
- Joined: Sun May 15, 2005 6:31 am
Re: More explanations for simple users
Drahken's recent post is the best explanation about JPEG so far:
jpg output: What does subsampling, DCT, ... do?
About optimizing the Huffman table: I have read that this tunes the compression using data from the image itself rather than the default tables the math professors have provided. I have seen a marginal reduction in file size for many (not all) images using this. In a moment I'll use the stopwatch to see if I can notice anything practical.
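For anyone who wants to reproduce that file-size comparison, here is a minimal sketch in Python using the Pillow library (my choice for illustration only, not anything XnView ships; Pillow's optimize flag should map to the same underlying libjpeg Huffman-optimization switch, and photo.jpg is a placeholder path):
[code]
# Minimal sketch (Python + Pillow): re-save the same image with and without
# optimized Huffman tables and compare the resulting file sizes.
import os
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")   # placeholder input file

# optimize=False -> standard (default) Huffman tables
img.save("default_tables.jpg", format="JPEG", quality=85, optimize=False)
# optimize=True  -> Huffman tables built from this image's own statistics
img.save("optimized_tables.jpg", format="JPEG", quality=85, optimize=True)

for name in ("default_tables.jpg", "optimized_tables.jpg"):
    print(name, os.path.getsize(name), "bytes")
[/code]
On most photos the optimized file should come out slightly smaller, matching the marginal reduction mentioned above.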
- Posts: 11
- Joined: Thu Jun 18, 2009 5:36 pm
Re: More explanations for simple users
Thanks for the link and the explanation! Drahken's explanations are very useful too. I would really suggest adding those to XnView's help file / user guide.
So, about the Huffman table's optimization. Why is it disabled by default? Are there any drawbacks when it is enabled?
- XnThusiast
- Posts: 2443
- Joined: Sun May 15, 2005 6:31 am
Re: More explanations for simple users
Janis wrote: So, about the Huffman table's optimization. Why is it disabled by default? Are there any drawbacks when it is enabled?
I don't know. Someone else should answer. I haven't noticed any decoding speed differences so far. If it mattered much, I'd expect it to be noted in the options...
- Moderator & Librarian
- Posts: 6386
- Joined: Sun Sep 25, 2005 3:00 am
- Location: Ref Desk
Re: More explanations for simple users
Wikipedia (JPEG » [url=http://en.wikipedia.org/wiki/JPEG#Entropy_coding]Entropy coding[/url]) wrote: Entropy coding is a special form of lossless data compression. It involves arranging the image components in a “zigzag” order employing run-length encoding (RLE) algorithm that groups similar frequencies together, inserting length coding zeros, and then using Huffman coding on what is left.
<snip>
It has been found that Baseline Progressive JPEG encoding usually gives better compression as compared to Baseline Sequential JPEG due to the ability to use different Huffman tables (see below) tailored for different frequencies on each “scan” or “pass” (which includes similar-positioned coefficients), though the difference is not too large.
<snip>
The JPEG standard provides general-purpose Huffman tables; encoders may also choose to generate Huffman tables optimized for the actual frequency distributions in images being encoded.
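To make the phrase "Huffman tables optimized for the actual frequency distributions" a bit more concrete, here is a toy sketch of the underlying idea in Python: count how often each symbol occurs and give the most frequent symbols the shortest codes. It works on raw byte values purely for illustration; a real JPEG encoder applies this to the DC/AC coefficient symbols and also enforces a 16-bit code-length limit, which the sketch ignores.
[code]
# Toy illustration of deriving Huffman code lengths from observed frequencies.
import heapq
from collections import Counter

def huffman_code_lengths(data):
    """Return {symbol: code length in bits} for the given iterable of symbols."""
    freq = Counter(data)
    # Heap entries: (combined weight, tie-breaker, {symbol: depth so far})
    heap = [(weight, i, {sym: 0}) for i, (sym, weight) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)   # two least frequent subtrees ...
        w2, _, b = heapq.heappop(heap)
        merged = {sym: depth + 1 for sym, depth in {**a, **b}.items()}
        heapq.heappush(heap, (w1 + w2, tie, merged))   # ... get merged
        tie += 1
    return heap[0][2]

sample = b"aaaaaaaabbbccd"               # skewed data: 'a' dominates
for sym, length in sorted(huffman_code_lengths(sample).items()):
    print(chr(sym), "-> code length", length)
[/code]
The frequent symbol 'a' ends up with the shortest code, which is exactly what an image-specific table buys you over the general-purpose defaults.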
WebReference.com (Optimizing Web Graphics » [url=http://www.webreference.com/dev/graphics/compress.html]Compression[/url] » JPEG Enhancements) wrote: Huffman Code Optimization (most offer this feature) — Generates a custom “code table” that works best to compress your individual image instead of using a standard generic code table that works OK for most everything.
ImpulseAdventure ([url=http://www.impulseadventure.com/photo/optimized-jpeg.html]What is an Optimized JPEG?[/url] » Does JPEG Optimization affect Image Quality?) wrote: The huffman table optimization is a lossless (reversible) process that has absolutely no effect on the resulting image quality. If one has the option, it is almost always best to enable JPEG optimization. The extra file size savings can't hurt. However, as it may potentially reduce compatibility with some bad JPEG decoders, this may be enough of a reason for you to disable it.
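The "no effect on image quality" claim is also easy to check yourself, since only the lossless entropy-coding stage changes. A minimal sketch, again assuming Python with Pillow and a placeholder photo.jpg:
[code]
# Encode the same source pixels twice, with default and with optimized Huffman
# tables, decode both results, and compare the pixels. Because only the
# lossless entropy-coding stage differs, the decoded images should be identical.
from io import BytesIO
from PIL import Image

src = Image.open("photo.jpg").convert("RGB")   # placeholder input file

def encode(optimize):
    buf = BytesIO()
    src.save(buf, format="JPEG", quality=85, optimize=optimize)
    buf.seek(0)
    return buf

plain = Image.open(encode(False))
optimized = Image.open(encode(True))

print("identical pixels:", list(plain.getdata()) == list(optimized.getdata()))
[/code]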
Patrick Chase @ photo.net Forum ([url=http://photo.net/bboard/q-and-a-fetch-msg?topic_id=23&msg_id=0013B3]JPEG Baseline (“Standard”) vs. Baseline Optimized[/url]) wrote: Clarifying a bit:
Photoshop's “baseline” JPEG option means that it uses the default quantization divisor tables and Huffman dictionaries found in the JPEG standard (these are the tables specified on pg 37 and pp 509-517 of the “pink book” version of the standard, for anybody masochistic enough to care).
Photoshop's “baseline optimized” option means that the file still conforms to the restrictions of the baseline JPEG file format and of the JFIF recommendation, but that the actual quantization and Huffman tables (the values of the data items associated with DQT and DHT markers) are different from those presented in the standard. In practical terms this should have no effect on any halfway-decent decoder, because the decoder shouldn't be assuming anything about the Q and Huffman tables anyway (i.e. the decoder should be parsing the DQT and DHT markers to get the tables).
In practical terms, I've never seen a decoder that couldn't handle the output of Photoshop's “baseline optimized” mode. That most emphatically includes ‘xv’, with which I'm quite familiar because I've modified it to support various homebrewed image file formats that we use internally.
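As a footnote to the point about decoders parsing the DQT and DHT markers: those markers are easy to see in any JPEG file. Here is a simplified sketch in plain Python (it assumes a straightforward JFIF layout, treats photo.jpg as a placeholder, and stops at the start-of-scan marker):
[code]
# Walk the JPEG marker segments and report the DQT (quantization table) and
# DHT (Huffman table) segments. A conforming decoder takes its tables from
# these segments rather than assuming the standard's example tables, which is
# why "optimized" files still decode everywhere.
import struct

NAMES = {0xDB: "DQT (quantization tables)", 0xC4: "DHT (Huffman tables)"}

with open("photo.jpg", "rb") as f:        # placeholder input file
    data = f.read()

pos = 2                                   # skip the SOI marker (FF D8)
while pos + 4 <= len(data) and data[pos] == 0xFF:
    marker = data[pos + 1]
    if marker == 0xDA:                    # SOS: compressed scan data follows
        break
    (length,) = struct.unpack(">H", data[pos + 2:pos + 4])
    if marker in NAMES:
        print(f"FF {marker:02X}  {NAMES[marker]}: {length - 2} bytes of table data")
    pos += 2 + length
[/code]
An "optimized" file simply carries different numbers inside those DHT segments; the file structure a decoder sees is the same.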