Will scaling down incrementally hurt quality?

@Frith110

Posted in: #AdobePhotoshop

In Photoshop, will there be a difference in quality when a raster is scaled down 75% once as opposed to being scaled down 50% twice? In both cases, the final size will be the same: 25% of the original.

The reason I ask is that sometimes I want to scale down an image that I know has already been scaled down. I hate having to CTRL+Z (undo) a hundred times back to the state where the image was at its original size. If final quality is not affected, I'd rather just scale the image down right there and then.


5 Comments


 

@Si6392903

Most probably yes, but in most cases you won't even be able to notice the difference.

Edit: I see that people don't like my answer :). Maybe because it's simple. IMHO that doesn't make it any less true. Well… prove me wrong :).

Edit 2: I wanted to keep my answer brief but… :)

Q: In Photoshop, will there be a difference in quality when a raster is scaled down 75% once as opposed to being scaled down 50% twice? In both cases, the final size will be the same: 25% of the original.

A:


"Most probably yes" – take a look at muntoo's post. He says that each interpolation step introduces some minor errors. They are rounding or representaion errors and they can contribute to quality degradation. Simple conclusion: more steps, more possible degradation. So "most probably" image will loose quality during each scaling step. More steps – more possible quality degradation. So "most possibly" image will be more degraded if scaled in two times than in one. Quality loss is not certain — take a solid color image for example, but how often will any designer scale similar images?
"but in most cases you won't be even able to notice the difference" – again – muntoo's post. How big are potential errors? In his examples are images scaled not in 2 but in 75 steps and changes in quality are noticable but not dramatic. In 75 steps! What happens when image is scaled to 25% in Ps CS4 (bicubic, muntoo's sample, scaled in one and two steps accordingly)?




Can anyone see the difference? But the difference is there:

#: gm compare -metric mse one-step.png two-step.png
Image Difference (MeanSquaredError):
            Normalized    Absolute
           ============   ==========
     Red:   0.0000033905         0.0
   Green:   0.0000033467         0.0
    Blue:   0.0000033888         0.0
   Total:   0.0000033754         0.0


And it can be seen if properly highlighted (gm compare -highlight-color purple -file diff.png one-step.png two-step.png):



Points 1 and 2 make up my answer, which I hoped to keep brief, since the others were quite elaborate ;).

That's it! :) Judge it yourself.
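(If you want to reproduce the numbers without GraphicsMagick, here is a minimal sketch using Pillow and NumPy. It approximates, rather than replicates, Photoshop's bicubic resampling, and "original.png" is a placeholder for whatever source image you use.)

# one_vs_two_step.py -- compare a one-step and a two-step 25% downscale
from PIL import Image
import numpy as np

src = Image.open("original.png").convert("RGB")
w, h = src.size

# One step: straight to 25% with bicubic resampling.
one_step = src.resize((w // 4, h // 4), Image.BICUBIC)

# Two steps: 50%, then 50% again.
half = src.resize((w // 2, h // 2), Image.BICUBIC)
two_step = half.resize((w // 4, h // 4), Image.BICUBIC)

a = np.asarray(one_step, dtype=np.float64) / 255.0
b = np.asarray(two_step, dtype=np.float64) / 255.0

# Normalized per-channel mean squared error, like gm compare -metric mse above.
for i, channel in enumerate(("Red", "Green", "Blue")):
    print(f"{channel}: {np.mean((a[..., i] - b[..., i]) ** 2):.10f}")
print(f"Total: {np.mean((a - b) ** 2):.10f}")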



 

@BetL875

The OP asks about quality. Most of the responses are about pixel accuracy, which is all but irrelevant for a designer, or even a photographer.

Quality is a measure of how convincing and pleasing the end result is, not how "accurate" it is. As a great case in point, Cloning or Content Aware Fill replace unwanted parts of an image with plausible pixels: they look right, but they certainly can't be considered accurate.

In Photoshop, the main practical difference between downsizing incrementally vs. downsizing in one shot is that it takes a lot longer. If you charge by the hour, by all means go 1% at a time. If not, downsize in one shot. Make the image a Smart Object first, in case you ever want to make a bigger version later.

No matter what algorithm you use (and Dawson's comment about those algorithms is dead on -- they are amazing), downsizing throws away pixels. The algorithm subtracts pixels and modifies others by guessing how to make them look right. A good algorithm makes good guesses; it gives you a result that looks convincing, but it's not accurate in any meaningful sense. Honestly, accurate -- other than color! -- is not what you're looking for unless you're a scientist, in which case you probably would not be downsizing in the first place.

An image that's been downsized using the usual bicubic algorithm often benefits from a little bit of sharpening, but if you're creating jpegs for the web, sharpening will increase the file size.
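(For what it's worth, outside Photoshop that workflow might look like the following sketch with Pillow. The unsharp-mask numbers are illustrative guesses, not a recommendation, and the file names are placeholders.)

# downscale_and_sharpen.py -- one-shot bicubic downscale, then a light unsharp mask
from PIL import Image, ImageFilter

src = Image.open("original.jpg").convert("RGB")
target = (src.width // 4, src.height // 4)

small = src.resize(target, Image.BICUBIC)
sharpened = small.filter(ImageFilter.UnsharpMask(radius=1, percent=60, threshold=2))

# Heavier sharpening tends to cost more bytes once the JPEG is encoded.
small.save("resized.jpg", quality=80)
sharpened.save("resized_sharpened.jpg", quality=80)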

Correct quality in design is the quality you need for your end product. Anything beyond that adds time, but not value, to your work.

[Edit: Since there was a mention of enlarging in koiyu's revival of this question, I've added some comments on that subject.]

There's an idea kicking around that if you up-rez an image in small steps, as opposed to a single giant leap, you get a slightly better ("slightly less bad" would be more accurate) result. Scott Kelby promoted the idea some years ago, and it may have been true as of PS 7. I've not seen anything that convinced me that it's correct today. It didn't prove out in my own tests, back around PS CS2 and 3, but it has to be said that I didn't waste a lot of time on them.

I didn't spend time on deep testing because the slight difference between "degraded image quality" and "slightly less degraded image quality" has no practical value: neither is usable. In my own work, my simple rule is, "Don't upsize." As a matter of practicality in design work, an image that is a bit too low resolution for a particular purpose always looks better used as-is than that same image "up-sized" to the "correct" resolution by any process I've come across, including fractal and bicubic variations.



 

@Kevin459

This question is AWESOME! ... I think we're all getting too technical though.

100 x 100 pixel image = 10000 total pixels

Scaling an image down pulls pixels out. Scaling up adds them. Either way, the software takes an "educated guess" as to how to alter the file.

A single reduction: 90 x 90 (1900px removed from the original file information)

2-step reduction: 95 x 95 (975px removed), then 90 x 90 (another 925). The detail to catch here is that, of the total 1900px removed, the 925 discarded in the second step came from interpolated data rather than from the original information.

The original image is always the best. Fewer "generations" always equates to better quality (closest to the original quality).

PROOF (and a response to @muntoo's comment)



It's simple... it's an algorithm... it's not a set of human eyes. There are 3 colors here: 100% black, 50% black, and white (a grayscale image). No matter how I scale it (image size menu, transform tool, RGB, CMYK, 100 x 100px, 10 x 10in), the results are the same:

Along the black/gray edge you find 80% black (a color that doesn't exist). Along the white/gray edge you find 7% black (doesn't exist). [not an invitation for anti-alias argument here]

As we all know (being human, and all), a perfect reduction or enlargement would produce a Black/Gray/White striped box. And I still found that a single iteration (up or down) created a better replica than multiple iterations did.
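(Here is a rough way to rerun that experiment outside Photoshop, as a sketch with Pillow and NumPy, using bicubic resampling as a stand-in for Photoshop's scaling. The exact edge values will differ from Photoshop's, but interpolation still invents tones that were never in the original, and you can compare the one-step and multi-step results directly.)

# stripes.py -- scale a black/gray/white striped box once vs. in several steps
from PIL import Image
import numpy as np

# 300 x 300 grayscale image, three vertical stripes: 0 (black), 128 (50% black), 255 (white).
stripes = np.zeros((300, 300), dtype=np.uint8)
stripes[:, 100:200] = 128
stripes[:, 200:] = 255
img = Image.fromarray(stripes, mode="L")

one_step = img.resize((100, 100), Image.BICUBIC)

multi = img
for _ in range(4):                                 # four intermediate reductions...
    size = int(multi.width * 0.8)
    multi = multi.resize((size, size), Image.BICUBIC)
multi = multi.resize((100, 100), Image.BICUBIC)    # ...then down to the same final size

# Inspect a row across the black/gray boundary in each result.
print(np.asarray(one_step)[50, 30:38])
print(np.asarray(multi)[50, 30:38])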



 

@Caterina889

It's community wiki, so you can fix this terrible, terrible post.



Grrr, no LaTeX. :) I guess I'll just have to do the best I can.



Definition:

We've got an image (PNG, or another lossless* format) named A of size A_x by A_y. Our goal is to scale it by p = 50%.

Image ("array") B will be a "directly scaled" version of A. It will have B_s = 1 step.

A = B_(B_s) = B_1

Image ("array") C will be an "incrementally scaled" version of A. It will have C_s = 2 steps.

A ≅ C_(C_s) = C_2



The Fun Stuff:

A = B_1 = B_0 × p

C_1 = C_0 × p^(1÷C_s)

A ≅ C_2 = C_1 × p^(1÷C_s)

Do you see those fractional powers? They will theoretically degrade quality with raster images (rasters inside vectors depend on the implementation). How much? We shall figure that out next...



The Good Stuff:

C_e = 0 if p^(1÷C_s) ∈ ℤ

C_e = C_s if p^(1÷C_s) ∉ ℤ

where C_e represents the maximum error (worst-case scenario), due to integer round-off errors.

Now, everything depends on the downscaling algorithm (Super Sampling, Bicubic, Lanczos sampling, Nearest Neighbor, etc).

If we're using Nearest Neighbor (the worst algorithm for anything of any quality), the "true maximum error" (C_t) will be equal to C_e. If we're using any of the other algorithms, it gets complicated, but it won't be as bad. (If you want a technical explanation of why it won't be as bad as Nearest Neighbor, I can't give you one, because it's just a guess. NOTE: Hey mathematicians! Fix this up!)



Love thy neighbor:

Let's make an "array" of images D with D_x = 100, D_y = 100, and D_s = 10. p is still the same: p = 50%.

Nearest Neighbor algorithm (terrible definition, I know):

N(I, p) = mergeXYDuplicates(floorAllImageXYs(I_(x,y) × p), I), where only the x,y coordinates themselves are being multiplied, not their color (RGB) values! I know you can't really do that in math, and this is exactly why I'm not THE LEGENDARY MATHEMATICIAN of the prophecy.

(mergeXYDuplicates() keeps only the bottom-most/left-most x,y "elements" in the original image I for all the duplicates it finds, and discards the rest.)

Let's take a random pixel, D_0[39,23]. Then apply D_(n+1) = N(D_n, p^(1÷D_s)) = N(D_n, ~93.3%) over and over.

c_(n+1) = floor(c_n × ~93.3%)

c_1 = floor((39,23) × ~93.3%) = floor((36.3,21.4)) = (36,21)

c_2 = floor((36,21) × ~93.3%) = (33,19)

c_3 = (30,17)

c_4 = (27,15)

c_5 = (25,13)

c_6 = (23,12)

c_7 = (21,11)

c_8 = (19,10)

c_9 = (17,9)

c_10 = (15,8)

If we did a simple scale down only once, we'd have:

b_1 = floor((39,23) × 50%) = floor((19.5,11.5)) = (19,11)

Let's compare b and c:

b_1 = (19,11)

c_10 = (15,8)

That's an error of (4,3) pixels! Let's try this with the end pixels (99,99), and account for the actual size in the error. I won't do all the math here again, but I'll tell you it becomes (46,46), an error of (3,3) from what it should be, (49,49).

Let's combine these results with the original: the "real error" is (1,0). Imagine if this happens with every pixel... it may end up making a difference. Hmm... Well, there's probably a better example. :)
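(If you want to check those numbers, here is a tiny sketch that just iterates the coordinate arithmetic above. It only tracks where a pixel's coordinates land under repeated flooring; it is not a real Nearest Neighbor resampler.)

# coordinate_drift.py -- incremental vs. one-shot coordinate rounding
import math

p = 0.5                        # total scale factor
steps = 10                     # D_s
per_step = p ** (1 / steps)    # ~0.933 per step

def incremental(x, y):
    for _ in range(steps):
        x, y = math.floor(x * per_step), math.floor(y * per_step)
    return x, y

def one_shot(x, y):
    return math.floor(x * p), math.floor(y * p)

print(incremental(39, 23), one_shot(39, 23))   # (15, 8) vs. (19, 11)
print(incremental(99, 99), one_shot(99, 99))   # (46, 46) vs. (49, 49)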



Conclusion:

If your image is originally a large size, it won't really matter, unless you do multiple downscales (see "Real-world example" below).

It gets worse by a maximum of one pixel per incremental step (down) in Nearest Neighbor. If you do ten downscales, your image will be slightly degraded in quality.



Real-world example:


Downscaled by 1% incrementally using Super Sampling:







As you can see, Super Sampling "blurs" the image if it is applied a number of times. This is "good" if you're doing one downscale, but bad if you're doing it incrementally.
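(A rough way to try this outside Photoshop: the sketch below uses Pillow's BOX filter, i.e. plain area averaging, as a stand-in for Super Sampling. "original.png" is a placeholder file name.)

# one_percent_steps.py -- one-shot 25% downscale vs. shrinking 1% at a time
from PIL import Image

src = Image.open("original.png")
target = (src.width // 4, src.height // 4)

# One shot: straight to 25%.
src.resize(target, Image.BOX).save("one-shot.png")

# Incremental: shrink by 1% per step, then snap to the same final size.
img = src
while img.width > target[0]:
    w = max(target[0], int(img.width * 0.99))
    h = max(target[1], int(img.height * 0.99))
    img = img.resize((w, h), Image.BOX)
img.resize(target, Image.BOX).save("incremental.png")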



*Depending on the editor, and the format, this could potentially make a difference, so I'm keeping it simple and calling it lossless.



 

@Vandalay110

Generally, multiple scalings will reduce quality compared with a single scaling to the final size, but often the difference will be minimal. In particular, scaling down by exact ratios, such as your example of (2:1, 2:1) versus (4:1), will show very little degradation compared with the single scaling. It's best, of course, to do all modifications at the highest resolution and then scale only once at the end. When the exact scaling is not known up front, you can do a series of test scalings to find the right size, note that size, throw the test image away, and perform a single scaling to that size from the original.
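(Outside Photoshop, that last workflow might look something like this sketch with Pillow; the chosen size is a placeholder you would note down from your test scalings.)

# single_scale_from_original.py -- experiment on copies, then resize the original once
from PIL import Image

original = Image.open("original.png")     # keep this file untouched

chosen_size = (800, 600)                  # placeholder: the size you settled on while testing

original.resize(chosen_size, Image.LANCZOS).save("final.png")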


