Why is px often preferred over pt even though both depend on an "unknown" device parameter?

@Murray155

Posted in: #Fonts #ScreenSize #WebsiteDesign

I always heard that in web design one should not prefer pt as a size unit, because the browser/OS does not necessarily know the correct DPI. The result: final font sizes are not equal on different computers (Mac and Windows assume different default DPI, for example).

But isn't this true for the "preferred" px as well? If I buy a Full HD monitor with a 20" diagonal and a Full HD monitor with a 40" diagonal, the font sizes will differ by a factor of two, because the physical pixels on the bigger monitor are twice the size of those on the smaller one. So px has the same problem: the browser does not know the physical size of the device.
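To make that factor of two concrete, here is a small sketch of the arithmetic, using the two hypothetical Full HD monitors from the question (the function names are made up for illustration, not any real API):

// Rough estimate of how large a px-sized font renders physically,
// given a monitor's resolution and diagonal size in inches.
function pixelsPerInch(resX: number, resY: number, diagonalInches: number): number {
  const diagonalPixels = Math.sqrt(resX * resX + resY * resY);
  return diagonalPixels / diagonalInches;
}

function physicalHeightInches(fontSizePx: number, ppi: number): number {
  return fontSizePx / ppi;
}

const ppi20 = pixelsPerInch(1920, 1080, 20); // ~110 PPI on the 20" monitor
const ppi40 = pixelsPerInch(1920, 1080, 40); // ~55 PPI on the 40" monitor

console.log(physicalHeightInches(16, ppi20)); // ~0.145" for a 16px font
console.log(physicalHeightInches(16, ppi40)); // ~0.29" -> twice as large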

Conclusion: if I take a ruler and measure a pt-based font on two different devices, I can get different results (unknown real DPI). But I can just as easily get different results for a px-based font size, simply because a device with the same resolution is bigger or smaller (unknown real physical pixel size).

So, why is the "unknown device parameter" argument used against pt but not against px?



Bonus question: why don't devices report their physical size to the OS via their device drivers, anyway? A device should know how big it is and how many physical pixels it has, because someone built it with a defined size and pixel density and could simply store that in the firmware.
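For context on what a web page can actually query today: the browser exposes a resolution in CSS pixels and a device pixel ratio, but no physical panel dimensions, so the missing piece really is the screen's real-world size. A quick sketch using the standard browser properties, run from a console:

// What the browser will tell you about the display (standard Web APIs):
console.log(screen.width, screen.height);  // resolution in CSS pixels
console.log(window.devicePixelRatio);      // CSS px -> device px scale factor
// There is no standard property for the panel's physical width/height in
// inches or millimetres, so the page cannot compute a true PPI from this.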





1 Comment


@Si4351233

Good question. I hope I got this right: it seems the point (being DPI-dependent) is more sensitive to the digital display environment, whereas a pixel is more static.

A pt is much older and comes from the print/lithography era. Classically it is 1/72 of an inch. So in that regard an inch is an inch is an inch... except when it goes digital. For a device to work out what an inch is, it must know its own pixel size, pixel density, resolution, physical size, and so on. Those are four or five "device parameters" that affect the "real world parameter" of a baseline inch in a conditional(ish) manner. A point could equal 20 pixels or a point could equal 200 pixels, depending on the device, but it is always equivalent to 1/72 of an inch (which is why you often see 72 DPI quoted for digital displays).
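To put that dependence in code form, here is a rough sketch of the pt-to-pixel relationship described above (the function and the example DPI values are illustrative, not any real rendering engine's API):

// Converting a point size to device pixels requires knowing the display's DPI:
// pixels = points / 72 * dpi, so the result swings with the device.
function pointsToPixels(points: number, dpi: number): number {
  return (points / 72) * dpi;
}

console.log(pointsToPixels(12, 72));   // 12 px on a classic 72 DPI display
console.log(pointsToPixels(12, 96));   // 16 px on a 96 DPI desktop
console.log(pointsToPixels(12, 300));  // 50 px on a 300 DPI phone panel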

A px is simpler: it's a pixel, without regard for density, inches, etc. This means there are no "device parameters" affecting the conditional calculation of said px. Retina seems like it would be different, but it's not; it's still just a pixel, and the high-density scaling is handled in a separate layer that doesn't really affect the proportions of the px itself. So 20 pixels could equal 1 point (dot) or 200 pixels could equal 1 point (dot), but 1 pixel is always equivalent to 1 pixel.
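A quick way to see that split between the px you author and the hardware pixels underneath (devicePixelRatio is a standard browser property; the variable names are just for illustration):

// A CSS px is always a CSS px; the device decides how many hardware
// pixels it uses to paint one of them.
const cssFontSize = 16;                   // what you write in the stylesheet
const dpr = window.devicePixelRatio;      // e.g. 1 on a classic display, 2 on Retina
const hardwarePixels = cssFontSize * dpr; // what the panel actually lights up
console.log(`16px is drawn with ${hardwarePixels} device pixels per side`);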

As far as the bonus question goes, I think it comes down to: "Life is unfair, get over it. Just kidding. We weren't thinking too hard about the future and didn't expect multi-device resolution to become such a critical thing, so we didn't worry about coding up good identifiers for developers to use. We have started to get better at this and are now sharing some better data from screens."


