HQ Resizing: Testing speed and asking for new options

Ideas for improvements and requests for new features in XnView Classic

Moderators: XnTriq, xnview

foxyshadis
Posts: 387
Joined: Sat Nov 18, 2006 8:57 am


Post by foxyshadis »

This comes from my post in a bug report. The actual suggestions are bolded for quick skimming.

Musing:
First, I'm utterly astounded: tonight I measured the exact CPU time taken by the standard fullscreen HQ (bilinear) and by the slide show (shown to be Lanczos). The results were the opposite of my expectations, and point either to a severe bug or problem in the bilinear algorithm, or to a major crimp somewhere in the fullscreen code path. First, the numbers.

My testing methodology: insert some tiny 'filler' images around the one of interest to eliminate any decoding overhead (read-ahead and remember-previous are on). View the pic, flip forward, and read out the process's CPU time; flip back and compare the difference. Repeat a few times. Both modes use the same text overlays with the same font, etc.
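The methodology above boils down to: read the process's CPU time before and after an operation, then take the difference. A minimal stdlib sketch of that measurement loop (the measured operation here is a CPU-bound stand-in, not XnView's actual resize code):

```python
import time

def cpu_cost(operation, repeats=3):
    """Average CPU seconds consumed by `operation` over several runs.

    time.process_time() counts only this process's CPU time, which is
    what the test above reads out, rather than wall-clock time.
    """
    before = time.process_time()
    for _ in range(repeats):
        operation()
    return (time.process_time() - before) / repeats

# Example: measure a CPU-bound stand-in for a resize pass.
cost = cpu_cost(lambda: sum(i * i for i in range(100_000)))
```

Averaging over a few repeats mirrors the "repeat a few times" step and smooths out timer granularity.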

3543x1772 -> 1280x640
fullscreen: 0.3s to 0.5s
slide show: 0.5s to 0.6s

70x70 -> 1024x1024
fullscreen: 0.2s to 0.3s
slide show: less than 0.1s!

Suggestion 1:
With this information, I not only recommend again using Lanczos to upscale small images from a quality perspective, but now also from a speed perspective! The tenths-of-a-second difference is actually noticeable when it's more than twice as long. I have no particular opinion on downscaling, and at least it behaves closer to how it should; but Lanczos should be ~2 times as slow as bilinear, so the results are still somewhat anomalous, pointing to a bottleneck in the fullscreen code that isn't present in the slide show. Maybe this is related to the same anomalous behavior Olivier_G noticed.

(Running a similar test in AviSynth, but at 8 times the resolution with fully optimized resizers, shows a ~2x difference: 0.3s vs 0.6s for 1024x1024 -> 8192x8192. At XnView's output sizes the time would be unmeasurable. That's why I always say "slow" when referencing XnView's resizers; I don't mean to be harsh.)

Suggestion 2:
At least give us a way to choose the resizer, if you won't change the default. Both for testing speed and sharpness, people can choose their own tradeoff. This would fit fairly naturally into the GUI, except that different modes make sense for up- and downscaling. An ini-only option would of course also make sense.

Musing again:
I've been thinking about the resizing modes lately. I actually think the browser fullscreen's current behavior is the best default, but it's not at all obvious why, and it's not configurable. My reasoning is that when someone zooms above full screen in the quicker fullscreen mode, they probably don't want to wait around; but in a mode designated for editing, speed is not so much of the essence.

(I don't think edit mode even needs a fullscreen, let alone a re-implemented and subtly different one, because I think conflating browsing and editing too closely is actually confusing and problematic - but that's how XnView is built, and I'm probably in the minority.)

Suggestion 3:
My only change would be to drop filtering to LQ at 1.5x or 2x screen size, instead of 1x, which extends the useful HQ zoom range for nearly-full-screen pics without being overly burdensome. 2x screen size is where it slows irritatingly, and 4x is the point at which it becomes unusable. Of course, since filtering speed largely depends on output size, this is unrelated to the actual zoom level. Also, I freely admit I'm on the higher end of systems, and on others even 1x may be burdensome - which I guess was the reason filtering in the quick/"browser" FS mode wasn't around for a long time in the first place.
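This suggestion amounts to a simple threshold test: compare the zoomed output size against the screen, and drop to LQ above a configurable multiple. A minimal sketch of that decision; the function name and the 2x default are my own, not XnView's:

```python
def pick_quality(img_w, img_h, zoom, screen_w, screen_h, hq_limit=2.0):
    """Use HQ filtering only while the zoomed output stays within
    `hq_limit` times the screen size; above that, fall back to LQ.

    The test is on output size, not zoom level, matching the point
    that filtering cost depends on how many pixels get produced.
    """
    out_w, out_h = img_w * zoom, img_h * zoom
    ratio = max(out_w / screen_w, out_h / screen_h)
    return "HQ" if ratio <= hq_limit else "LQ"
```

An ini setting could then simply expose `hq_limit` (1x for slower machines, 1.5x or 2x for faster ones).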

This is probably one of those things best set to a 'reasonable' default and only changeable in the ini, since I can't think of any way to fit it into the gui without confusing people even more.

Suggestion 4:
In the fullscreen mode, the right-click menu could use a way to force HQ on/off quickly; it's something that makes sense for switching on the fly. ("HQ Mode" -> "Default" / "On" / "Off" / "Switch Default On/Off" - separator - "Bilinear (Smooth)" / "Bicubic (Sharp)" / "Lanczos (Sharper)", of course hoping to have the algorithm selectable in the future ;) ) Another of those things filling up the huge right-click menu; those FastStone popout menus are really handy for preventing that. :p

That last suggestion might be most controversial and it's the one I'm least attached to. It's a nice idea but with the proper setup it should rarely be needed, unless you often switch between photo and sprite/icon graphics.

Now to find out if I'm full of it, or if I just wrote too much for anyone to read. ;)
JohnFredC
XnThusiast
Posts: 2010
Joined: Wed Mar 17, 2004 8:33 pm
Location: Sarasota Florida

Post by JohnFredC »

I find this topic very interesting, because HQ mode is just too slow for my use on my 2.8 GHz / 1 GB RAM / 2 GB swap PC.

...mostly because the images I view are 4 MP or (much) larger. For the largest images (~20k x ~10k currently), there is already so much detail that the smoothing which HQ provides isn't really necessary until very high levels of magnification.

However, it would be great to enable HQ "on the fly" when zoomed 100% or more even on those large images.

Long ago I suggested a strategy for speeding up HQ. Essentially it involved smoothing sectors of the image one at a time (instead of the entire image), starting with the visible portion of the image and then using some kind of predictive algorithm (based on the user's movement around the image via mouse or cursor keys) to select which sectors to do next.

That might not be any faster, though, because of the overhead from paging the smoothed image data into the display buffer... Still, it's an idea worth investigating.
John

Post by foxyshadis »

That's an interesting point, and here's how I'd go about getting reasonable HQ speed for those:

For every image where one dimension is between 2.1x and 4x the screen size, sample the image down (LQ) to half resolution and take the HQ smoothing of that. For everything above 4x, sample (LQ again) down to 2x the screen size and smooth that. If beat frequencies prove to be a problem, you could add another 3x decimation range between 4x and 6x, but I don't think they'll matter by that point.

Thus the most you'll ever have to work with is 2x the screen size. Even 20k x 10k can be decimated to (say) 2560x1280 very quickly, and it'd still look nice and smooth when displayed. [Edit: Actually 2x is only a 'safe margin' guess; I suspect that subsampling enough to keep it between 1.2x and 1.5x will be sharp enough, but I have to test first.]
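The tiering above can be written down directly. A sketch under the post's own assumptions (the 2.1x/4x cut-offs and the 2x-screen cap; the function name is hypothetical):

```python
def plan_decimation(img_w, img_h, screen_w, screen_h):
    """Return the intermediate size for the fast LQ pass; the HQ
    smoothing then runs on that intermediate instead of the original."""
    ratio = max(img_w / screen_w, img_h / screen_h)
    if ratio <= 2.1:
        return (img_w, img_h)                 # small enough: HQ directly
    if ratio <= 4.0:
        return (img_w // 2, img_h // 2)       # half-resolution LQ pass
    scale = 2.0 / ratio                       # cap output at 2x screen size
    return (round(img_w * scale), round(img_h * scale))
```

With a 1280x640 screen, a 20000x10000 image decimates to 2560x1280, matching the example above; the HQ filter then only ever sees at most a 2x-screen-sized input.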

Just a cheap way of combining the two, since HQ is definitely overkill but pure LQ, even on the highest resolutions, can still look a little jagged sometimes.

(In the meantime, a quick HQ on/off would still be welcomed.)

A sector at a time works reasonably until you want to move around; then you have to wait while another part renders. :( As a way to quickly get something onscreen while the full image is being drawn, it'd be great, though. Displaying is effectively instant, especially if sections are immediately thrown away, so you don't have to worry about a slowdown there.
Last edited by foxyshadis on Wed Feb 21, 2007 1:00 am, edited 1 time in total.

Post by JohnFredC »

I wasn't thinking about throwing away smoothed sections... that seems really wasteful. Rather, it would be better to store them in a buffer and examine the buffer before smoothing the next candidate section.

Each zoom level would have its own section grid "table" based on:
  1. the relationship between the viewer's pixel dimensions and the zoom level, and
  2. the XY center of the currently viewed section.
The sections should probably overlap by a certain percentage.

The logic would cause XnView to smooth sections spiralling outward from the currently-viewed section toward the edges of the image, resetting the center of the "spiral" to the new section whenever panning is detected (and restarting the "spiral" algorithm some small "hesitation" interval after panning ceases). That way, chances are better that the sections adjacent to the viewed section have already been smoothed before the next panning event (which naturally has to travel through the adjacent sections toward its destination). The algorithm might bump the smoothing priority of untraversed sections that lie in the extended path of the most recent panning vector. It wouldn't be "right" all of the time, but it seems reasonable to watch user behavior and extrapolate.
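One way to sketch that ordering: rank sectors by their ring distance from the currently viewed sector, sweeping around each ring by angle, and re-run the ranking whenever panning moves the center. This is purely illustrative (XnView exposes no such API), and it ignores the panning-vector priority bump:

```python
import math

def smoothing_order(cols, rows, center_x, center_y):
    """Order grid sectors roughly spiralling outward from the viewed
    sector: nearest ring first, sweeping around each ring by angle."""
    def key(cell):
        dx, dy = cell[0] - center_x, cell[1] - center_y
        ring = max(abs(dx), abs(dy))          # Chebyshev "ring" distance
        return (ring, math.atan2(dy, dx))
    return sorted(((x, y) for x in range(cols) for y in range(rows)), key=key)
```

The viewed sector comes first, then its eight neighbors, then the next ring out, which is exactly the property that makes adjacent sectors likely to be ready before the next pan.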

Eventually all sections would have been smoothed. Perhaps XnView could optionally offer to save the smoothed image and reuse it next time (even if the smoothing hasn't completed; no point in deliberately wasting all that work). If the user agrees to save the smoothed image (whether complete or partial), XnView could reuse it the next time the file is viewed at that zoom level. This might make sense if the user frequently re-views the same image at that zoom level. From time to time XnView could purge the smoothed data based on options set by the user, or write it out to an independent image file.
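The buffer-and-reuse idea can be sketched as a small cache keyed by file, zoom level, and sector, with oldest-first eviction standing in for the purge policy. All names here are hypothetical, not anything in XnView:

```python
class SmoothedSectorCache:
    """In-memory store of smoothed sectors keyed by
    (image_path, zoom_level, sector); evicts oldest entries first."""

    def __init__(self, max_sectors=256):
        self.max_sectors = max_sectors
        self._store = {}                      # dicts keep insertion order

    def get(self, path, zoom, sector):
        """Return cached pixels for this sector, or None if unsmoothed."""
        return self._store.get((path, zoom, sector))

    def put(self, path, zoom, sector, pixels):
        """Store a freshly smoothed sector, purging the oldest if full."""
        key = (path, zoom, sector)
        if key not in self._store and len(self._store) >= self.max_sectors:
            self._store.pop(next(iter(self._store)))   # purge oldest entry
        self._store[key] = pixels
```

Checking `get()` before smoothing a candidate section is the "examine the buffer" step; persisting `_store` to disk would cover the save-and-reuse variant.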

In any case, a very memory intensive affair for large images, with a complex, changing, "what to smooth" decision tree. And likely total overkill for small to medium sized images.
John