Most HDR is Kinda Bullcrap...

Discussion in 'Accessories' started by hmscott, Jun 7, 2018.

  1. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    4,569
    Messages:
    15,958
    Likes Received:
    19,638
    Trophy Points:
    931
    Most HDR is Kinda Bullcrap...
    Linus Tech Tips
    Published on Jun 7, 2018
     
  2. RampantGorilla

    RampantGorilla Notebook Evangelist

    Reputations:
    58
    Messages:
    435
    Likes Received:
    181
    Trophy Points:
    56
  3. Dennismungai

    Dennismungai Notebook Consultant

    Reputations:
    102
    Messages:
    230
    Likes Received:
    166
    Trophy Points:
    56
So it's this chroma subsampling nonsense that makes non-HDR content look like **** when HDR is enabled in Windows.

    Oh, and btw: If you're planning to enable HDR mode on a 4k panel with an off the shelf HDMI cable over 2 meters long, you're in for a surprise. A nasty handshaking error surprise.

    Are there any 'active' HDMI cables over 2m length that can achieve this task?
     
    hmscott likes this.
  4. hmscott

    AMD FreeSync Updates: TVs, HDR support, Q&A!
    PC Perspective
    Published on Jun 19, 2018
    Interested in new gaming displays? Interested in new gaming displays that can also do HDR? Then you are going to want to watch our live stream with AMD about its plans for the future of FreeSync. AMD will discuss changes to FreeSync at our event, perhaps with an additional surprise or two along the way.
     
  5. hmscott

    Finally caught up to this via a video news segment whose description was summarized in a way that caught my attention, even though I came for the AMD news segments:

    Video news coverage describing the issues:
    11:15 - Fake 4K 144Hz HDR G-Sync Monitors


    ...and it answers for me why these were announced a year ago and are only now getting released, as they can't meet the spec due to G-sync limitations and DisplayPort 1.4 bandwidth limitations.

    Reading the text coverage at the beginning of the Reddit link didn't make the problem gel as quickly - a TL;DR at the time would have helped greatly - I might even have found time to read the rest given a clear destination for the prose. It technically walked through the details, but it took forever, plus another Edit, to get to the root cause - G-Sync sucks :)
    https://www.reddit.com/r/Monitors/c..._144_hz_monitors_use_chroma_subsampling_for/
    I'm seeing a lot of user reviews for the new 4K 144 Hz monitors, and it seems like everyone mentions that it looks noticeably worse at 144 Hz. I keep expecting these posts to say "due to the 4:2:2 chroma subsampling", but instead they say "I'm not sure why" or something like that, both on here and on various forums. It seems monitor companies have done their usual good job of "forgetting" to inform people of this limitation, as most of the early adopters are apparently unaware that these monitors are not actually capable of full 4K 144 Hz, even though the subsampling was mentioned in the Anandtech article a month or two ago. In any case, I want to make people aware of what chroma subsampling is, and that these first-gen 4K 144 Hz monitors use it.

    Chroma Subsampling

    Chroma subsampling is a method of reducing bandwidth by partially lowering the resolution of the image.

    Imagine you have a 4K image; 3840 × 2160 pixels. Each pixel is composed of a RED value between 0–255, a GREEN value 0–255, and a BLUE value 0–255. You could imagine this 3840 × 2160 full color image as three separate monochrome images; a 3840 × 2160 grid of RED values, one of GREEN values, and another of BLUE values, which are overlaid on each other to make the final image.

    Now, imagine that you reduce the resolution of the RED and GREEN images to 1920 × 1080, and when you reconstruct the full image you do it as if you were upscaling a 1080p image on a 4K screen (with nearest neighbor scaling); use each 1080p pixel value for a square of 4 pixels on the 4K screen. This upscaling is only done for the RED and GREEN values; the BLUE image is still at full resolution so BLUE has a unique value for every 4K pixel.

    This is the basic principle behind chroma subsampling. Reducing resolution on some of the pixel components, but not all of them.

    The description above, of reducing resolution by half in both the vertical and horizontal resolution, on 2 of the 3 components, is analogous to 4:2:0 chroma subsampling. This reduces bandwidth by one half (One channel at full resolution, and 2 channels at one-quarter resolution = same number of samples as 1.5 out of 3 full-resolution channels)
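    The sample-count math above is easy to sanity-check with a few lines of Python (a sketch, not from the original post; a small stand-in plane size is used, the ratios are the same as at 4K):

```python
# Toy sanity check of the 4:2:0-style sample-count math described above.
H, W = 8, 8  # stand-in plane size (think 2160 x 3840)

full_samples = 3 * H * W                        # three full-resolution planes
sub_samples = H * W + 2 * (H // 2) * (W // 2)   # one full plane + two quartered

print(sub_samples / full_samples)  # 0.5 -> half the bandwidth

# Nearest-neighbor reconstruction: each surviving sample fills a 2x2 block
plane = [[(r * W + c) % 256 for c in range(W)] for r in range(H)]
small = [row[::2] for row in plane[::2]]
recon = [[small[r // 2][c // 2] for c in range(W)] for r in range(H)]
assert len(recon) == H and len(recon[0]) == W
```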

    Full resolution on all components is known as "4:4:4" or non-subsampled. Generally it's best to avoid calling it "4:4:4 subsampling", because it sounds like you're saying "uncompressed compression". 4:4:4 means no subsampling is being used.

    4:2:2 subsampling is cutting the resolution in half in only one direction (i.e. 1920 × 2160; horizontal reduction, but full vertical resolution) on 2 out of the 3 components. This reduces the bandwidth by one third.
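    The J:a:b ratios work out to simple fractions of the full 4:4:4 bandwidth. A hypothetical helper (not from the original post) to verify the "one third" and "one half" figures, counting samples in the standard 4-wide, 2-tall reference block:

```python
# Bandwidth fraction of each subsampling mode relative to 4:4:4,
# counted over a 4x2 reference block: 8 luma samples, plus the given
# number of chroma samples for each of the Cb and Cr planes.
def bandwidth_fraction(mode):
    chroma = {"4:4:4": 8, "4:2:2": 4, "4:2:0": 2}[mode]
    return (8 + 2 * chroma) / 24.0

print(bandwidth_fraction("4:2:2"))  # 2/3 -> one-third reduction
print(bandwidth_fraction("4:2:0"))  # 1/2 -> half the bandwidth
```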

    YCbCr

    Above, I used subsampling RGB components only as an example; "RGB subsampling" is pretty terrible and is generally not used in computer systems (it has been implemented in hardware in Samsung's PenTile phone displays, but other than that, it's not very common). In an RGB system, since each of the 3 components dictates the brightness of one of the primary colors, changing one of the RGB values affects both the hue and brightness of the resulting total color. Therefore, using one R, G, or B value on a neighboring pixel makes a very noticeable change, so subsampling would degrade the image by quite a lot.

    Instead, subsampling is generally only used in combination with YCbCr. YCbCr is a different method of specifying colors, used as an alternative to RGB for transmission. Of course, physically speaking, every display generates an image using discrete red, green, and blue elements, so eventually every image will need to be converted to the RGB format in order to be displayed, but for transmission, YCbCr has some useful properties.

    What is YCbCr Anyway?

    People get confused about what YCbCr actually is; misuse of terminology all over the place adds to the confusion, with people incorrectly calling it a "color space" or "color model" or things like that*. Generally, it is referred to as a "pixel encoding format" or just "pixel format". It is just a method for specifying colors. Really, it is an offshoot of the RGB system, it is literally just RGB with a different axis system. Imagine a two-dimensional cartesian (X-Y) coordinate system, then imagine drawing a new set of axes diagonally, at 45º angles to the standard set, and specifying coordinates using those axes instead of the standard set. That is basically what YCbCr is, except in 3 dimensions instead of 2.

    If you draw the R, G, and B axes as a standard 3D axis set, then just draw 3 new axes at 45º-45º angles to the original, and there you have your Y, Cb, and Cr axes. It is just a different coordinate system, but specifies the same thing as the RGB system.
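    The change of axes can be written out concretely. A sketch (not from the original post) using BT.709-style full-range coefficients - the exact numbers vary between standards, and the real axes aren't literally at 45º, but the point is that it's an invertible coordinate change:

```python
# Full-range BT.709-style RGB <-> YCbCr transform (illustrative coefficients).
Kr, Kb = 0.2126, 0.0722
Kg = 1.0 - Kr - Kb

def rgb_to_ycbcr(r, g, b):
    y  = Kr * r + Kg * g + Kb * b        # luma: weighted brightness
    cb = (b - y) / (2.0 * (1.0 - Kb))    # blue-difference chroma
    cr = (r - y) / (2.0 * (1.0 - Kr))    # red-difference chroma
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 2.0 * (1.0 - Kr) * cr
    b = y + 2.0 * (1.0 - Kb) * cb
    g = (y - Kr * r - Kb * b) / Kg
    return r, g, b

# Greys (R = G = B) sit exactly on the Y axis: Cb = Cr = 0
print(rgb_to_ycbcr(0.5, 0.5, 0.5))  # (0.5, 0.0, 0.0) up to float rounding

# The change of axes alone is invertible, so no information is lost by it
print(ycbcr_to_rgb(*rgb_to_ycbcr(0.2, 0.7, 0.1)))  # ~ (0.2, 0.7, 0.1)
```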

    You can see how the YCbCr axes compare to the familiar "RGB cube" formed by the RGB axis set (RGB axes themselves not shown, unfortunately):https://upload.wikimedia.org/wikipedia/commons/b/b8/YCbCr.GIF

    (*EDIT: the term "color space" is very loosely defined and has several usages. By some definitions, YCbCr could be considered a "color space", so it is not strictly speaking "incorrect" to call it that. However, in the context of displays, the term "color space" generally refers to some specific standard defining a set of primary color chromaticity coordinates/gamut boundaries, white point, among other things, like the sRGB or AdobeRGB standards. YCbCr is not a "color space" by that definition.)

    Why Even Use YCbCr?

    YCbCr is useful because it specifies brightness and color separately. Notice in the image from the previous section, the Y axis (called the "luma" component) goes straight down the path of equal-RGB values (greys), from black to white. The Cb and Cr values (the "chroma" components) specify the position perpendicular to the Y axis, which is a plane of equal-brightness colors. This effectively makes 1 component for brightness, and 2 components for specifying the hue/color relative to that brightness, whereas in RGB the brightness and hue are both intertwined in the values of all 3 color channels.

    This means you can do cool things like remove the chroma components entirely, and be left with a greyscale version of the image; this is how color television was first rolled out, by transmitting in YCbCr*. Any black-and-white televisions could still receive the exact same broadcast, they would simply ignore the Cb and Cr components from the signal. (*EDIT: I use "YCbCr" here as a general term for luminance-chrominance based coordinate systems (YUV systems). Some people use "YCbCr" exclusively to refer to the digital YUV systems we use today, and use the term "YPbPr" for the analog equivalent, but it's all the same concept).

    Subsampling components also works much better in YCbCr, because the human eye is much less sensitive to changes in color than to changes in brightness. You can therefore subsample the chroma components without touching the luma component, reducing the color resolution without affecting the brightness of each pixel, which doesn't look much different to our eyes. As a result, YCbCr chroma subsampling (perceptually) affects the image much less than subsampling RGB components directly would. When converted back to RGB, of course, every pixel will still have a unique RGB value, but it won't be quite the same as it would be if the chroma subsampling had not been applied.
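    In code form, the key property is that the luma plane is never touched at all - only the chroma planes lose detail. A toy sketch (not from the original post):

```python
# Toy 4x4 planes: 4:2:0-style subsampling keeps Y byte-for-byte intact
# and only degrades the chroma planes.
Y  = [[10, 20, 30, 40], [50, 60, 70, 80],
      [90, 100, 110, 120], [130, 140, 150, 160]]
Cb = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]

def subsample_420(plane):
    # Keep one chroma sample per 2x2 block, then repeat it back out
    small = [row[::2] for row in plane[::2]]
    n = len(plane)
    return [[small[r // 2][c // 2] for c in range(n)] for r in range(n)]

Cb_after = subsample_420(Cb)
print(Cb_after != Cb)  # True: chroma detail is genuinely gone
# Y is never passed through subsample_420, so per-pixel brightness survives
```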

    Terminology Notes

    Since RGB-format images don't have luma or chroma components, you can't have "chroma subsampling" on an RGB image, since there are no chroma values for you to subsample in the first place. Terms like "RGB 4:4:4" are redundant/nonsensical. RGB format is always full resolution in all channels, which is equivalent or better than YCbCr 4:4:4. You can just call it RGB, RGB is always "4:4:4".

    Also, chroma subsampling is not a form of compression, because it doesn't involve any de-compression on the receiving side to recover any of the data. It is simply gone. 4:2:2 removes half the color information from the image, and 4:2:0 removes 3/4 of it, and you don't get any of it back. The information is simply removed, and that's all there is to it. So please don't refer to it as "4:2:2 compression" or "compressed using chroma subsampling" or things like that, it's no more a form of compression than simply reducing resolution from 4K to 1080p is; that isn't compression, that's just reducing the resolution. By the same token, 4:2:2 isn't compression, it's just subsampling (reducing the resolution on 2/3 of the components).

    Effects of Chroma Subsampling

    Chroma subsampling reduces image quality. Since chroma subsampling is, in effect, a partial reduction in resolution, its effects are in line with what you might expect from that. Most notably, fine text can be affected significantly, so chroma subsampling is generally considered unacceptable for desktop use. Hence, it is practically never used for computers; many monitors don't even support chroma subsampling.

    The reduction in quality tends to be much less noticeable in natural images (i.e. excluding test images specifically designed to exploit subsampling). 4:2:2 chroma subsampling is standard for pretty much all cinema content; most broadcast, streaming, and disc content (blu-ray/DVD) uses YCbCr 4:2:0 subsampling since it reduces the bandwidth for both transmission and storage significantly. Games are typically rendered in RGB, and aren't subsampled. The effects of 4:2:2 subsampling probably won't be that noticeable in games, but it certainly will be on the desktop, and switching back and forth every time you want to turn on 144 Hz for games, then turning it back down to something lower so you can use full RGB on the desktop, would be quite a pain.

    Interface Limitations - Why No Support for 4K 144 Hz RGB?

    Chroma subsampling has started seeing implementation on computers in situations where bandwidth is insufficient for full resolution. The first notable example of this was NVIDIA adding 4K 60 Hz support to its HDMI 1.4 graphics cards (Kepler and Maxwell 1.0). Normally, HDMI 1.4 is only capable of around 30 Hz at 4K, but with 4:2:0 subsampling (which reduces bandwidth by half), double the framerate can be achieved within the same bandwidth constraints, at the cost of image quality.

    Now, we're seeing it in these 4K 144 Hz monitors. With full RGB or YCbCr 4:4:4 color, DisplayPort 1.4 provides enough bandwidth for up to 120 Hz at 4K (3840 × 2160) with 8 bpc color depth, or up to around 100 Hz at 4K with 10 bpc color depth (exact limits depend on the timing format, which depends on the specific hardware; in these particular monitors, they apparently cap at 98 Hz at 4K 10 bpc). These monitors claim to support 4K 144 Hz with 10 bpc color depth, so some form of bandwidth reduction must be used, which in this case is YCbCr 4:2:2.
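    The rough arithmetic behind those limits can be sketched in a few lines (not from the original post; this ignores blanking intervals, which is why the real-world 10 bpc limit lands near 98 Hz rather than 100 Hz):

```python
# Rough bandwidth arithmetic behind the DP 1.4 limits quoted above.
DP14_GBPS = 25.92  # DisplayPort 1.4 payload rate (HBR3 x4 lanes, after 8b/10b)

def gbps(width, height, hz, bits_per_pixel):
    # Pixel data rate only; real links also spend bandwidth on blanking
    return width * height * hz * bits_per_pixel / 1e9

print(gbps(3840, 2160, 144, 24))  # 8 bpc RGB @ 144 Hz: ~28.7 -> doesn't fit
print(gbps(3840, 2160, 120, 24))  # 8 bpc RGB @ 120 Hz: ~23.9 -> fits
# 10 bpc 4:2:2 is 20 bpp (10 for Y, plus 5 each for half-resolution Cb/Cr)
print(gbps(3840, 2160, 144, 20))  # 10 bpc 4:2:2 @ 144 Hz: ~23.9 -> fits
```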

    Before anyone mentions HDMI 2.1, it's not possible to implement HDMI 2.1 yet. Only the master specification has been released. I know a lot of people seem to think that when the specification is released, we'll start seeing products any day now, but that's not the case at all. The specification is the document that tells you how to build an HDMI 2.1 device; the release of that document is only the point at which engineers can start designing silicon capable of it, let alone displays that use that silicon. The DisplayPort 1.4 standard was released in early 2016, over 2 years ago, and we're only just now starting to see it implemented in monitors (I believe it has been implemented on only 1 monitor prior to this, the Dell UP3218K). Also, there are no graphics cards with HDMI 2.1 yet, so it wouldn't help much right now on a monitor anyway.

    The HDMI 2.1 compliance test specification isn't even finished being written yet, so even if you had HDMI 2.1 silicon ready somehow, there's currently no way to have it certified, as the testing procedures haven't been released by the HDMI Forum yet. HDMI 2.1 is still under development from a consumer perspective. The release of the main specification is only a release for engineers.

    DSC Compression - The Missed Opportunity

    The creators of these monitors could have opted to use Display Stream Compression (DSC), which is a form of compression, unlike subsampling; it reduces bandwidth, and the image is reconstructed on the receiving side. DSC is part of the DisplayPort 1.4 standard, but Acer/ASUS chose not to implement it, likely for hardware availability reasons; presumably no one has produced display controllers that support DSC, and Acer/ASUS wanted to rush to get the product out rather than implement 4K 144 Hz properly. Note that DP 1.4 supports up to 4K 120 Hz uncompressed and non-subsampled; they could have simply released it as a 4K 120 Hz monitor with no tricks, but that sweet 144 Hz number was calling to them, I guess. They probably feel marketing a "120 Hz" monitor would seem outdated, and don't want to be outdone by the competition. Such is life in this industry... Still, they can be run at 120 Hz non-subsampled if you want; no capability has been lost by adding subsampling. It's just that people are not getting what they expected, due to the unfortunate lack of transparency about the limitations of the product.

    EDIT: I forgot that these are G-Sync monitors. This is most likely why the monitor manufacturers did not support proper 4K 144 Hz using DSC, dual cables, or some other solution. When you make a G-Sync display, you have no choice but to use the NVIDIA G-Sync module as the main display controller instead of whatever else is available on the market. This means you are forced to support only the features that the G-Sync module has. There are several versions of the G-Sync module (these monitors use a new one, with DisplayPort 1.4 support), but G-Sync has historically always been way behind on interface support and very barebones in feature support, so come to think of it, I highly doubt that the new G-Sync module supports DSC, or PbP/MST (for dual cable solutions).

    If this is the case, it's more the fault of NVIDIA for providing an inadequate controller to the market, than the monitor manufacturers for "choosing" to use chroma subsampling (it would be the only way of achieving 144 Hz in that case). However it is still on them for not simply releasing it as a 4K 120 Hz display, or being clear about the chroma subsampling used for 144 Hz. Anyway, we'll have to wait and see what they do when they release FreeSync or No-sync 4K 144 Hz monitors, where NVIDIA's limitations don't apply.

    UPDATE: AMD_Robert has replied that current AMD Radeon graphics cards themselves do not support DSC. No official word on whether NVIDIA graphics cards support DSC. If not, then it certainly makes more sense why display manufacturers are not using it. In that case, the only way to support 4K 144 Hz RGB would be via a dual-cable PbP solution.

    DSC Concerns

    Before anyone says "meh we don't want DSC anyway", I'll answer the two reservations I anticipate people will have.
    1. DSC is a lossy form of compression. While it is true that DSC is not mathematically lossless, it is much, much better than chroma subsampling, since it recovers almost all of the original image. Considering that in natural images most people don't even notice 4:2:2 subsampling, the image quality reduction with DSC is not going to be noticeable. The only question is how it performs with text, which remains to be seen since no one has implemented it. Presumably it will handle text a lot better than subsampling does.

    2. Latency. "Compression will add tons of lag!". According to VESA, DSC adds no more than 1 raster scan line of latency. Displays are refreshed one line at a time, rather than all at once; on a 4K display, the monitor refreshes 2160 lines of pixels in a single refresh. At 144 Hz, each full refresh is performed over the course of 6.944 ms, therefore each individual line takes around 3.2 microseconds (0.0032 ms), actually less than that due to blanking intervals, but that's a whole different topic :p https://www.displayport.org/faq/#tab-display-stream-compression-dsc
    How does VESA’s DSC Standard compare to other image compression standards?

    Compared to other image compression standards such as JPEG or AVC, etc., DSC achieves visually lossless compression quality at a low compression ratio by using a much simpler codec (coder/decoder) circuit. The typical compression ratio of DSC range from 1:1 to about 3:1 which offers significant benefit in interface data rate reduction. DSC is designed specifically to compress any content type at low compression with excellent results. The simple decoder (typically less than 100k gates) takes very little chip area, which minimizes implementation cost and device power use, and adds no more than one raster scan line (less than 8 usec in a 4K @ 60Hz system) to the display’s throughput latency, an unnoticeable delay for interactive applications.
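    Putting the "one raster scan line" figure into numbers (a quick sketch, not from the original post; blanking intervals are ignored, so real line times are slightly shorter):

```python
# Worst-case DSC latency of one raster scan line, in microseconds.
def line_time_us(lines, hz):
    # One refresh covers `lines` scan lines in 1/hz seconds
    return 1e6 / (lines * hz)

print(line_time_us(2160, 144))  # ~3.2 us at 4K 144 Hz
print(line_time_us(2160, 60))   # ~7.7 us at 4K 60 Hz (VESA's "< 8 usec")
```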

    Conclusion

    I know the internet loves to jump on any chance to rant about corporate deceptions, so I suppose now it's time to sit back and watch the philosophical discussions go... Is converting to YCbCr, reducing the resolution to 1920 × 2160 in 2 out of 3 components, and converting back to RGB really still considered 4K?

    Then again, a lot of people are still stuck all the way back at considering anything other than 4096 × 2160 to be "4K" at all :p (hint: the whole "true 4K is 4096 × 2160" thing was just made up by uninformed consumer journalists scrambling to write the first "4K AND UHD EXPLAINED" article back when 4K TVs were first coming out; in the cinema industry where the term originated, "4K" is and always has been a generic term referring to any format ≈4000 pixels wide; somehow people have latched onto the "true 4K" notion and defend it like religion though... But anyway, getting off topic :3)

    These 4K 144 Hz monitors use YCbCr 4:2:2 chroma subsampling to reach 4K 144 Hz. If you want RGB or YCbCr 4:4:4 color, the best you can do on these is 4K 120 Hz with 8 bpc color depth, or 4K 98 Hz with 10 bpc color depth (HDR).

    Like I said, in natural images, chroma subsampling doesn't have much of an impact, so I expect most people will have a hard time noticing any significant reduction in image quality in games. However, it will be quite the eyesore on the desktop, and most people will probably want to lower the refresh rate so that you can use the desktop in RGB. And switching back and forth between pixel formats/refresh rates every time you open and close a game is going to get old pretty fast. Personally, I'd probably just run it at 120 Hz 8 bpc RGB all the time, that's perfectly acceptable to me. It's just unfortunate they opted to use subsampling as opposed to DSC to get the 144 Hz.

    Anyway, this is just a friendly PSA, so hopefully fewer people will be caught off guard by this. If you're going to buy one of these 4K 144 Hz monitors, then just be aware that desktop and text will have degraded image quality when operating at 4K above 120 Hz (8 bpc/SDR) or 98 Hz (10 bpc/HDR). Or if you already have one of these monitors and are wondering why text looks bad at 144 Hz, that's why.
     
  6. hmscott

    ASUS and Acer UHD G-Sync HDR Monitors Forced to Use Color Compression at 120/144 Hz
    by Hilbert Hagedoorn on: 06/18/2018 03:46 PM
    http://www.guru3d.com/news-story/as...ed-to-use-color-compression-at-120144-hz.html

    "We've mentioned the new Ultra HD G-Sync HDR ACER and ASUS monitors a couple of times already. Over the weekend some reported the ACER one got in the news due to a loud ventilator, today more news reaches the web, in high-refresh-rate modes, the displays fall back to color compression.

    A few early adopters of these HDR, local dimming monster monitors noticed and reported on Reddit that when using a high refresh rate, the image quality dropped significantly. The story now is that the ASUS and Acer screens make use of color compression at 120 and 144 Hz, not because the panel couldn't handle it, but the main limitation is signal bandwidth over DisplayPort 1.4. This also means you pretty much need to run your Windows desktop at 60 Hz for a bit of a quality readable view.

    DisplayPort 1.4 has too little bandwidth available to drive 4K 144 Hz without compression. To bypass that, the monitor signal reverts to 4:2:2 chroma subsampling. Basically, your brightness information will remain intact; however, the color information will be based on half the resolution, 1920 x 2160 pixels. All is good up to 98 Hz; after that, it's 4:2:2 chroma subsampling ... on your 2500 Euro / 2000 USD screen. Lovely. There's no real solution for this, other than new display connectors and graphics cards that support such high bandwidth connections - HDMI 2.1. "

    ASUS ROG Swift PG27UQ 27" 4K 144Hz G-SYNC Monitor: True HDR Arrives on the Desktop

    Author: Ken Addison, Date: June 22, 2018
    https://www.pcper.com/reviews/Graph...144Hz-G-SYNC-Monitor-True-HDR-Arrives-Desktop

    "Originally demonstrated at CES 2017, the ASUS ROG Swift PG27UQ debuted alongside the Acer Predator X27 as the world's first G-SYNC displays supporting HDR. With promised brightness levels of 1000 nits, G-SYNC HDR was a surprising and aggressive announcement considering that HDR was just starting to pick up steam on TVs, and was unheard of for PC monitors. On top of the HDR support, these monitors were the first announced displays sporting a 144Hz refresh rate at 4K, due to their DisplayPort 1.4 connections.

    However, delays led to the PG27UQ being displayed yet again at CES this year, with a promised release date of Q1 2018. Even more slippage led us to today, where the ASUS PG27UQ is available for pre-order for a staggering $2,000 and set to ship at some point this month.

    In some ways, the launch of the PG27UQ very much mirrors the launch of the original G-SYNC display, the ROG Swift PG278Q. Both displays represented the launch of an oft-awaited technology, in a 27" form factor, and were seen as extremely expensive at their time of release..."

    And, then the truth comes out...

    "Edit: For clarification, the 98Hz limit represents the refresh rate at which the monitor switches from 4:4:4 to 4:2:2 chroma subsampling. While this shouldn't affect things such as games and movies to a noticeable extent, 4:2:2 can result in blurry or difficult to read text in certain scenarios. For more information on Chroma Subsampling, see this great article over at Rtings.

    For the moment, given the lack of available GPU products to push games above 98Hz at 4K, we feel like keeping the monitor in 98Hz mode is a good compromise. However, for a display this expensive, it's a negative that may make this display age faster than expected.

    One caveat though, with HDR enabled at 4K, the maximum refresh rate is limited to 98Hz. While this isn't a problem for today, where graphics cards can barely hit 60 FPS at 4K, it is certainly something to be aware of when buying a $2,000 product that you would expect to last for quite a long time to come..."
     
    Last edited: Jun 23, 2018 at 2:08 AM