Part I: Project settings
Part II: BT.2020 or P3-D65 Limited?
Part III: HDR Palette, project render settings, generate MaxCLL and MaxFALL HDR10 metadata
Part IV: HDR to SDR Conversion LUT for YouTube
Part V: RAW
Part VI: The Grade
Part VII: The State of HDR Film Emulation LUTs & Plugins
Part VIII: Why HDR Production Monitors Matter
Of the 300-odd new items introduced with the release of DaVinci Resolve 17, perhaps the single most groundbreaking feature is the HDR palette – so much so that we thought it worthwhile to devote a separate section of our workflow to familiarizing readers with a few of its functions. Whereas the primary color wheels consist of Lift/Gamma/Gain, the HDR palette includes individual controls for black, dark, shadow, light, highlight and specular zones, as well as a global control, permitting extraordinarily precise manipulation of targeted tonal regions. And as if that weren’t enough, the user can create new custom color wheels; toggle individual wheels on and off; rearrange the position of each of the zones; increase or decrease the falloff; and instantly preview the effect each of these changes has on the image in the viewer.
Working within the zone system of the HDR palette dispenses with the tiresome burden of fussing around with curves and qualifiers. Not having to use as many windows or keyers is huge. It’s worth remarking that with the release of Resolve 17, it is no longer necessary to enable HDR settings on nodes for controls to work intuitively when working in a wide gamut timeline space or when delivering HDR – the HDR palette takes care of this automatically. It is also notable that, unlike the Lift/Gamma/Gain wheels, the HDR palette allows you to increase contrast without raising saturation; decrease contrast without reducing saturation; and boost highlights without incurring extreme levels of saturation in those highlights. Being color space aware, when used in conjunction with RCM, the controls of the HDR palette adapt themselves to the limits of the clip’s image data as mapped from the input color space you’ve assigned to the timeline color space of your project: consequently, the behavior of the controls should feel virtually identical regardless of the type of source clip or timeline color space.
After playing around with the new HDR wheels for just a few hours, returning to the lift, gamma and gain controls of old felt coarse – and we would be surprised if other NLEs did not follow suit in the coming years. Incidentally, the Color Warper’s pretty insane, too!
Project render settings
Here are recommended settings for rendering your project.
You might want to deliver HEVC to save hard drive space.
To save your render settings for future jobs, go to the top right corner of the render settings palette, click on the three dots and choose ‘Save As New Preset’.
Next, give your preset a name and click ‘OK’.
Your preset will now appear to the left of the custom preset.
For best results, render using ProRes 4444 XQ.
ICtCp, or why we can’t have nice things
Here at the Daejeon Chronicles, we’ve tirelessly championed RAW for a long time now, for a multitude of reasons: it responds better to grading than heavily compressed, low bit rate chroma subsampled codecs and is less prone to banding; sharpening, noise reduction and white balance aren’t baked into RAW files, giving more flexibility in post; 12-bit RAW is preferable to 10-bit log, as converting from one 10-bit format to another, such as when creating the SDR version from the HDR master, introduces errors; and in the case of the Sony a7s III, the RAW files have nearly 25% more resolution than the internal codecs, which can come in handy when stabilizing footage or cropping. And lastly, 10-bit log is to be avoided because Y’CbCr chroma subsampling causes hue shifts, saturation issues, luminance errors and unwanted noise in HDR. Because so many affordable cameras can shoot RAW nowadays, it’s no longer the insurmountable hurdle it once was.
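To make the requantization point concrete, here’s a minimal sketch (assuming Python with NumPy; the functions are our own toy model, not Resolve’s conversion) showing how few of a 10-bit PQ master’s 1,024 code values actually fall within the 0–100 nit range an SDR gamma 2.4 deliverable has to spread across all 1,024 of its own codes – which is where code gaps and banding come from:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(code, bits=10):
    """Decode PQ code values to absolute luminance (0 to 10,000 nits)."""
    e = code / (2 ** bits - 1)
    p = e ** (1 / M2)
    return 10000 * (np.maximum(p - C1, 0) / (C2 - C3 * p)) ** (1 / M1)

def nits_to_sdr_code(nits, bits=10, peak=100.0, gamma=2.4):
    """Re-encode luminance as SDR gamma 2.4 code values, clipping at 100 nits."""
    v = np.clip(nits / peak, 0, 1) ** (1 / gamma)
    return np.round(v * (2 ** bits - 1)).astype(int)

codes = np.arange(1024)            # every 10-bit PQ code value
nits = pq_to_nits(codes)
sdr = nits_to_sdr_code(nits)
in_range = codes[nits <= 100.0]    # PQ codes inside SDR's 0-100 nit range
used = len(np.unique(sdr[nits <= 100.0]))
print(f"{len(in_range)} PQ codes cover 0-100 nits, "
      f"producing {used} of 1024 possible SDR codes")
```

Roughly half the PQ codes sit above 100 nits and simply clip, so the SDR output can’t use all of its code values – with a 12-bit source there are four times as many input codes to draw from, which is precisely the headroom that makes the HDR-to-SDR trip cleaner.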
However, when it comes to video sharing platforms like YouTube and Vimeo, we’re back to encoding our video in a variant of Y’CbCr, a color representation whose origins can be traced back nearly 70 years. Practically all consumer video, whether streamed, broadcast or distributed on UHD Blu-ray, relies on 4:2:2 or 4:2:0 subsampling to reduce bandwidth – and it’s not so much the subsampling itself as the fact that it’s carried out in a way that is destructive to HDR, ensuring that the viewer will absolutely not be seeing the video as intended, that is reprehensible. The screenshot above gives an indication of how the Y’CbCr color transformation of the HEVC rendered clip distorts color and luminance information. ICtCp is vastly superior to Y’CbCr: it’s compatible with HEVC, and all Dolby Vision enabled televisions and most modern displays can decode it, but currently it’s only used by Disney+, Apple TV+, Netflix and Amazon. In an ideal world, the entire imaging pipeline, from camera to screen, whether high end or budget production, would be able to take advantage of ICtCp, but we don’t expect to see that happen anytime soon.
Generate MaxCLL and MaxFALL HDR10 Metadata in DaVinci Resolve
As consumer HDR displays have differing peak luminance and contrast capabilities that are not infrequently lower than those of the mastering display, it’s necessary to map PQ content to match the capabilities of the target consumer display. This scaling or mapping is controlled and managed by metadata. According to YouTube’s help page,
“HDR videos must have HDR metadata in the codec or container to be played back properly on YouTube. Once a video has been properly marked as HDR, uploading it follows the usual process of uploading a video. YouTube will detect the HDR metadata and process it, producing HDR transcodes for HDR devices and an SDR downconversion for other devices.
To be processed, HDR videos must be tagged with the correct:
- Transfer function (PQ or HLG)
- Color primaries (Rec. 2020)
- Matrix (Rec. 2020 non-constant luminance)
HDR videos using PQ signaling should also contain information about the display it was mastered on (SMPTE ST 2086 mastering metadata). It should also have details about the brightness (CEA 861-3 MaxFALL and MaxCLL). If it’s missing, we use the values for the Sony BVM-X300 mastering display. Optionally, HDR videos may contain dynamic (HDR10+) metadata as ITU-T T.35 terminal codes or as SEI headers”.
DaVinci Resolve is able to generate this metadata for us. After you’ve finished grading your video, you can go ahead and add MaxCLL (Maximum Content Light Level) and MaxFALL (Maximum Frame Average Light Level) HDR10 metadata. MaxFALL is the value, in nits, of the highest frame-average brightness in your project, while MaxCLL is the value, in nits, of the brightest pixel component in the project. In Project Settings, on the Color Management page, check Enable HDR10+.
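For those curious what Resolve’s analysis is actually measuring, here’s a minimal sketch of the two definitions (assuming Python with NumPy and toy linear-light frames expressed in nits; `max_cll_fall` is our own hypothetical helper, not a Resolve API):

```python
import numpy as np

def max_cll_fall(frames):
    """Compute MaxCLL and MaxFALL (both in nits) from RGB frames.

    frames: iterable of (H, W, 3) arrays whose values are luminance in nits.
    MaxCLL  = brightest single pixel component in any frame.
    MaxFALL = highest per-frame average of each pixel's max component.
    """
    max_cll = 0.0
    max_fall = 0.0
    for frame in frames:
        per_pixel_max = frame.max(axis=2)        # max of R, G, B per pixel
        max_cll = max(max_cll, per_pixel_max.max())
        max_fall = max(max_fall, per_pixel_max.mean())
    return max_cll, max_fall

# Toy example: two dim frames, one with a single 1,000-nit specular highlight
dark = np.full((4, 4, 3), 5.0)
bright = dark.copy()
bright[0, 0] = [1000.0, 800.0, 700.0]
cll, fall = max_cll_fall([dark, bright])
```

In the toy example, the lone 1,000-nit specular pixel sets MaxCLL to 1,000 nits, while MaxFALL comes out around 67 nits, because the rest of that frame averages the highlight down – which is why a brief specular flash barely moves MaxFALL.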
Next, on the color page, select the Color tab > HDR10+ > Analyze All Shots.
On the Deliver page, check Embed HDR10 Metadata.
If you check Export HDR10 Metadata, a document will appear alongside the video file. In order to open it, change the suffix to mov.txt. The document will look something like this:
The metadata includes information about chroma subsampling and bit depth (4:2:2 10-bit), color primaries (BT.2020), transfer characteristics (PQ), matrix coefficients (BT.2020 non-constant), mastering display color primaries (P3), mastering display luminance (min: 0.0001 cd/m2, max: 1,000 cd/m2), MaxCLL and MaxFALL.
And lastly, this PVC podcast about DaVinci Resolve 17 with guest Alexis Van Hurkman is pretty much required listening.
With this kind of export, is there no difference between the generated file played on your TV (from a USB key plugged directly in) and the file played through the YouTube app?
Maybe! The file sizes are huge! 😂 I haven’t tested any of the other codecs yet.
When I use these export settings my file is still at 4:2:0 chroma subsampling. Is there a way to export 10-bit H.265 at 4:2:2?
Blackmagic DaVinci Resolve 17.4.4 update added support for encoding H.265 Main 10 4:2:2 on Apple silicon. If you’re still not getting 4:2:2, I’d recommend using one of the flavors of ProRes.
I’m not on Apple silicon (Intel Mac), so maybe that’s the issue? I can get 4:2:2 on ProRes no problem; just trying to figure out why it drops to 4:2:0 on H.265. I tried exporting a ProRes master at 4:4:4 and using Compressor to render an H.265 off of that, but it came out as 10-bit 4:2:0 as well.
It does trigger HDR on YouTube and Vimeo though, and you can tell the difference.
How do you know what the subsampling of the clips is?
I opened it with an app called Invisor. It gives a technical breakdown of whatever file is opened with it. It’ll tell you the codec, chroma subsampling, bit rate, color space and gamma, etc…
I opened my latest video edited and rendered in Resolve and it’s HEVC 4:2:0 10-bit according to Invisor. Now I see there are three options for encoding: Main (8-bit), Main10 (10-bit 4:2:0) and Main 4:2:2 10. I went ahead and changed the recommendations in the post. Thanks much, Ethan!
How do the MaxFALL and MaxCLL metadata items get set for live HDR previews, where you can’t calculate anything? Would you just assume the X300 settings, as you mention?
I’m sorry, Sam, but I don’t know what a live HDR preview is.
Sorry – to clarify: you’re able to calculate MaxFALL and MaxCLL because you already have the content. If you’re monitoring a live image in HDR (this isn’t Resolve related, so apologies, it’s off topic) where you don’t have fixed content to calculate from, how would you set these values?
I really don’t know! Those values are usually measured off the finished content after grading.
Thanks for your response. I’ll keep digging!