Good - the first thing I notice is: it opens with 10 bit precision, but as ARGB, with each color channel being 10 bits and alpha being 2 bits.
Second thing: GBM is used, and when the LinuxRendererGLES starts rendering, it is put into 10 bit mode.
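To make those first two points concrete, here is a minimal sketch (not Kodi's actual code) of allocating such a 10 bit GBM scanout surface - GBM_FORMAT_ARGB2101010 packs 2 bits of alpha plus 10 bits per color channel into 32 bits; the device path and resolution are just example values:

Code:
/* Minimal sketch, not Kodi code: allocating a 10 bit ARGB GBM scanout surface. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <gbm.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);      /* device path is an assumption */
    if (fd < 0)
        return 1;

    struct gbm_device *gbm = gbm_create_device(fd);
    if (!gbm)
        return 1;

    struct gbm_surface *surf = gbm_surface_create(
        gbm, 3840, 2160,
        GBM_FORMAT_ARGB2101010,                   /* 2 bit alpha, 10 bits per color */
        GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

    if (!surf) {
        fprintf(stderr, "ARGB2101010 scanout surface not supported\n");
    } else {
        /* EGL surface creation and GLES rendering would follow here */
        gbm_surface_destroy(surf);
    }

    gbm_device_destroy(gbm);
    close(fd);
    return 0;
}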
Third thing:
Quote:
2024-05-01 03:22:31.452 T:690 debug <general>: LinuxRendererGLES::Configure: HDR passthrough: on
2024-05-01 03:22:31.459 T:690 debug <general>: GLES: using scaling method: 18
2024-05-01 03:22:31.459 T:690 debug <general>: GLES: using shader: gles_convolution-6x6.frag
2024-05-01 03:22:31.463 T:690 info <general>: GLES: Selecting multi pass rendering
2024-05-01 03:22:31.463 T:690 debug <general>: GLES: Requested render method: 0
2024-05-01 03:22:31.463 T:690 info <general>: GLES: Selecting YUV 2 RGB shader
2024-05-01 03:22:31.463 T:690 debug <general>: GLES: using shader format: 10
2024-05-01 03:22:31.463 T:690 debug <general>: GLES: using tonemap method: 0
2024-05-01 03:22:31.464 T:690 debug <general>: GLES: using shader format: 10
2024-05-01 03:22:31.464 T:690 debug <general>: GLES: using tonemap method: 0
2024-05-01 03:22:31.466 T:690 debug <general>: CRenderManager::Configure - 4
2024-05-01 03:22:31.467 T:690 warning <general>: CLinuxRendererGLES::UpdateVideoFilter - chosen scaling method 18, is not supported by renderer
2024-05-01 03:22:31.467 T:690 info <general>: GLES: Selecting single pass rendering
Here we see two things: a YUV 2 RGB shader is selected, so the video planes are transformed to RGB here as well. Hopefully this results in "proper" RGB values where no tone mapping is applied (as can be seen above, tone mapping is not the case here), but the inverse linearization that we do to reach proper sRGB opens a black-box path. Remember: what you have on the plane here (and yes, it's the GUI plane) should be "compatible" with the tone mapping your TV is going to do. So our shader applies the BT.2020 conversion, which means we have a one-time-processed YUV 2 RGB color transformation.
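For reference, this is roughly what that BT.2020 conversion amounts to numerically - a small sketch using the textbook BT.2020 coefficients (Kr = 0.2627, Kb = 0.0593) for 10 bit limited range input, not the actual shader code; the output is still PQ/HLG encoded R'G'B', so the inverse linearization mentioned above would be a separate step:

Code:
/* Sketch of a BT.2020 limited-range YCbCr -> R'G'B' conversion (not Kodi's shader). */
#include <stdio.h>

static void bt2020_ycbcr_to_rgb(int y10, int cb10, int cr10,
                                double *r, double *g, double *b)
{
    /* 10 bit limited range: Y in [64, 940], chroma in [64, 960] centered at 512 */
    double y  = (y10  -  64) / 876.0;
    double cb = (cb10 - 512) / 896.0;
    double cr = (cr10 - 512) / 896.0;

    *r = y + 1.4746 * cr;                     /* 2 * (1 - Kr) */
    *g = y - 0.1645 * cb - 0.5713 * cr;       /* derived from Kr, Kg, Kb */
    *b = y + 1.8814 * cb;                     /* 2 * (1 - Kb) */
}

int main(void)
{
    double r, g, b;
    bt2020_ycbcr_to_rgb(700, 400, 600, &r, &g, &b);   /* arbitrary sample pixel */
    printf("R=%.3f G=%.3f B=%.3f\n", r, g, b);
    return 0;
}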
So in the ideal case, you end up with an RGBA plane holding your high precision values. I have no idea what your TV reports ... the hope, though, is that the combination of "Light Metadata" + RGBA planes, properly output / converted by the Intel DRM driver, ends up in a nice combination.
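As far as I understand it, that "Light Metadata" reaches the driver (and from there the TV) via the DRM connector property HDR_OUTPUT_METADATA. A hedged sketch of that mechanism only - connector_id, the EOTF and the luminance values here are made-up examples, normally they would come from the stream:

Code:
/* Sketch: pushing HDR static metadata to a connector via HDR_OUTPUT_METADATA.
 * struct hdr_output_metadata comes from the DRM/kernel uapi headers. */
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static uint32_t find_connector_prop(int fd, uint32_t conn_id, const char *name)
{
    uint32_t id = 0;
    drmModeObjectProperties *props =
        drmModeObjectGetProperties(fd, conn_id, DRM_MODE_OBJECT_CONNECTOR);
    for (uint32_t i = 0; props && i < props->count_props; i++) {
        drmModePropertyRes *p = drmModeGetProperty(fd, props->props[i]);
        if (p && strcmp(p->name, name) == 0)
            id = p->prop_id;
        drmModeFreeProperty(p);
    }
    drmModeFreeObjectProperties(props);
    return id;
}

int set_hdr_metadata(int fd, uint32_t connector_id)
{
    struct hdr_output_metadata meta;
    memset(&meta, 0, sizeof(meta));
    meta.metadata_type = 0;                       /* HDMI static metadata type 1     */
    meta.hdmi_metadata_type1.eotf = 2;            /* SMPTE ST 2084 (PQ), example     */
    meta.hdmi_metadata_type1.max_cll  = 1000;     /* example values; normally taken  */
    meta.hdmi_metadata_type1.max_fall = 400;      /* from the stream's light metadata */

    uint32_t blob_id = 0;
    if (drmModeCreatePropertyBlob(fd, &meta, sizeof(meta), &blob_id))
        return -1;

    uint32_t prop = find_connector_prop(fd, connector_id, "HDR_OUTPUT_METADATA");
    return drmModeObjectSetProperty(fd, connector_id,
                                    DRM_MODE_OBJECT_CONNECTOR, prop, blob_id);
}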
But I see one point where I was wrong: the visual seems to be "large" enough. For the rest of the chain ... it's still questionable to me whether that's color correct enough.
Actually it would be great if VAAPI were capable of importing the decoded frames via dmabuf and GBM directly, so that the GLES shader would only really be used for the GUI. I don't see this video plane approach in the code yet.
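Roughly what that could look like (a hedged sketch, not code that exists in Kodi): export the decoded VAAPI surface as dmabuf fds via vaExportSurfaceHandle and import them into GBM, so a KMS video plane could scan the frame out directly; va_dpy, surface_id and gbm are assumed to already exist:

Code:
/* Sketch: VAAPI decoded frame -> dmabuf -> GBM buffer for a KMS video plane. */
#include <stdint.h>
#include <string.h>
#include <va/va.h>
#include <va/va_drmcommon.h>
#include <gbm.h>

struct gbm_bo *import_va_surface(VADisplay va_dpy, VASurfaceID surface_id,
                                 struct gbm_device *gbm)
{
    VADRMPRIMESurfaceDescriptor desc;

    /* Export the decoded surface as DRM PRIME (dmabuf) file descriptors. */
    if (vaExportSurfaceHandle(va_dpy, surface_id,
                              VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2,
                              VA_EXPORT_SURFACE_READ_ONLY |
                              VA_EXPORT_SURFACE_COMPOSED_LAYERS,
                              &desc) != VA_STATUS_SUCCESS)
        return NULL;

    /* Import the (composed) layer into GBM for direct scanout. */
    struct gbm_import_fd_modifier_data data;
    memset(&data, 0, sizeof(data));
    data.width    = desc.width;
    data.height   = desc.height;
    data.format   = desc.layers[0].drm_format;    /* e.g. P010 for 10 bit video */
    data.num_fds  = desc.layers[0].num_planes;
    data.modifier = desc.objects[0].drm_format_modifier;
    for (uint32_t i = 0; i < data.num_fds; i++) {
        data.fds[i]     = desc.objects[desc.layers[0].object_index[i]].fd;
        data.strides[i] = desc.layers[0].pitch[i];
        data.offsets[i] = desc.layers[0].offset[i];
    }

    return gbm_bo_import(gbm, GBM_BO_IMPORT_FD_MODIFIER, &data, GBM_BO_USE_SCANOUT);
}

Whether the display hardware can actually scan such a buffer out on a video plane (and handle the BT.2020 / PQ side of it) would then be up to the DRM driver.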
Thanks for posting.