Part I: Project settings
Part II: BT.2020 or P3-D65 Limited?
Part III: HDR Palette, project render settings, generating MaxCLL and MaxFALL HDR10 metadata
Part IV: HDR to SDR Conversion LUT for YouTube
Part V: RAW
Part VI: The Grade
Part VII: The State of HDR Film Emulation LUTs & Plugins
Part VIII: Why HDR Production Monitors Matter
Stuff we’re doing differently now than in the past:
- During shooting, avoiding yellow in the false color guide of the Ninja V entirely except for specular highlights (aside from the infrequent occasions when we purposely want to blow out highlights)
- Using the simplest tools for the job
- No longer using the X-Rite ColorChecker in post
- Switching to S-Gamut3.Cine from S-Gamut3
- Scrupulously applying a curve to each and every project
- Focusing more on color separation and keeping tonal contrast under control
- Labeling each and every node
- Split-toning by un-ganging the custom curve and pushing cyan into the shadows and reds into the highlights
- Applying masking tools photographically rather than technically (for example, rather than stupidly slapping on a vignette or power window to draw attention to the subject or their face, using the tools in Resolve to augment the lighting that’s already there)
- Exporting projects using ProRes 4444 XQ rather than HEVC
Following these precepts, the extent of our adjustments has actually decreased, with the result that we’re less likely to push the image to the breaking point. Selecting the simplest tool for the job (for example, Offset rather than the eyedropper tool and Hue vs. Hue) reduces the chances of introducing unwanted artifacts. Lower contrast means not having to rely on power windows all over the place. Split-toning and nudging yellows and reds toward orange and greens toward teal adds color separation, while applying grain and blur minimizes the objectionable video look. Split-toning is a characteristic of traditional photochemical print stock responsible for much of its charm and one of the traits that makes celluloid so attractive to filmmakers. And we no longer use the X-Rite ColorChecker (apart from white balancing the shot before recording) because printed color charts are only accurate for HD colorimetry and are useless for HDR wide color gamut.
Natural daylight coming in from window, no fill light or reflector used. Exposure was very hot. Photo is of Ninja V display.
Ninja V false color. Red represents clipped highlights, yellow represents the brightest highlights with texture and detail.
You might want to create a new node at the beginning of your node tree and knock down saturation of highlights and shadows a smidgen. Authorities insist shadows must be neutralized, but you should decide for yourself what works best.
The extended brightness of HDR is not meant to increase the overall light level of the entire picture but to allow headroom for specular highlights. The APL (average picture level) of HDR movies is usually the same as, or even lower than, that of SDR content, which is why we don’t need to jump for the remote to adjust brightness when switching between SDR and HDR content on Netflix or YouTube. A picture that is too bright will cause eye strain in the viewer, especially since HDR content is intended to be watched in a dark viewing environment. Making the entire image brighter would be perceived as poor quality and would also risk triggering the ABL (auto brightness limiter) on OLED displays, which can only sustain peak brightness when the average picture level stays below roughly 20%. Unlike SDR, PQ HDR is an absolute standard, meaning that it’s not possible to raise brightness to accommodate ambient lighting. In addition to the technical limitations, there is also the aesthetic element: as you raise the APL, there is less and less headroom for specular highlights, and they’re no longer impactful. This is why you’ll often see diffuse whites in dramas lower than the recommended 203 nits. Exceptions to the rule are when filmmakers use the extended brightness of HDR when a character goes from a dimly lit interior to the sunny outdoors, a technique that is not used nearly often enough, or to intentionally create discomfort in the viewer (e.g. flashing strobe lights). You can see for yourself just how much of a difference leaving headroom for specular highlights makes: gradually increase the brightness of any one of your HDR videos and you’ll instantly see the impact of those specular highlights vanish before your very eyes.
The #1 factor to consider prior to grading
When asked if he had any tips for those undertaking Dolby Vision HDR grading for the very first time, Siggy Ferstl, senior colorist at Company 3, gives the following piece of advice:
“Understand the look [that] the filmmakers are wanting and establish early on the sort of bright[ness] levels, how far you want to push the highlights… That’s probably […] the number one thing to establish before you even start doing the creative color… That will change the look of the show and also, if you say for instance go too bright in establishing that light and then you get into coloring, it can put a lot of extra work on you if you’re having to sort of force the highlights down the whole time.”
As a rule of thumb, the more contrasty the image, the less saturation you’ll need in the image overall. Another general principle is that highly saturated colors should not be too bright.
While the extended color volume of HDR results in perceptibly more vibrant colors, it pays to be cautious with saturation. Glowing skin and radioactive foliage are indications that saturation is cranked up too high. To keep an eye on saturation, use the vectorscope. In order to see the highlight and shadow excursions, enable the “extents” option of your scopes. Extents create an outline highlighting all graph excursions to show you the true level of all overshoots and undershoots in the video signal.
If you’ve got good color separation, the signal mass in the vectorscope should be straddling at least two quadrants while preserving skin tones. Instead of using the saturation knob to increase saturation, how about trying out this method instead:
The customizable zones of the HDR Palette in DaVinci Resolve allow the targeting of specific tonal ranges in the image. You’ve still got to watch the waveform monitor when making adjustments, though, since even when it looks like an area has been isolated in the viewer, the actual coverage might be greater, making it necessary to use a power window.
The overwhelming majority of online tutorials recommend making an S-curve, but they fail to make the crucial distinction between footage that has been underexposed, exposed for middle gray, or overexposed. Properly exposed S-Log3 should be exposed to the right by around 1.66 stops – and the correct way to shape the curve is to grab it somewhere around the mid-point and gently drag it downward like so.
On the other hand, if the footage has been underexposed, you’ll be grabbing the curve toward the left side and gradually lifting upward to form a curve in the opposite direction of the overexposed clip above.
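As a rough illustration of why the curve is pulled down on properly exposed footage and lifted on underexposed footage, here is where Sony’s published S-Log3 encoding places middle gray, both at nominal exposure and pushed right by the +1.66 stops mentioned above. This is a sketch in plain Python; the constants come from Sony’s S-Log3 specification.

```python
import math

def slog3_encode(reflectance):
    """Sony S-Log3 OETF: scene reflectance (0.18 = middle gray) to a
    normalized code value. Constants from Sony's published specification."""
    if reflectance >= 0.01125000:
        return (420.0 + math.log10((reflectance + 0.01) / (0.18 + 0.01))
                * 261.5) / 1023.0
    return (reflectance * (171.2102946929 - 95.0) / 0.01125000 + 95.0) / 1023.0

mid_gray = slog3_encode(0.18)            # nominal middle gray, ~41% signal
ettr = slog3_encode(0.18 * 2 ** 1.66)    # middle gray exposed 1.66 stops hot, ~53%
```

Footage that was deliberately exposed to the right therefore carries its midtones well above where they ultimately belong, which is why the curve gets grabbed near the mid-point and dragged downward; underexposed footage sits below that and is lifted instead.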
Creating color separation with split-toning
Split-toning is a characteristic of film stocks where you’ll see cool shadows and warm highlights adding color contrast to the image. This is accomplished by un-ganging the channels and adding a few points of blue and green to the shadows and a tiny bit of red and green to the highlights. Once you’ve made your adjustments to the curve, you can finesse them with the sliders to the right. Just remember, 50 represents zero – for example, moving the red slider to the left of 50 begins to add cyan.
It may seem trivial, but split-toning is the single most important ingredient toward creating a look; and the image will be perceived as more colorful and less video-ish than dialing in more saturation. The look can be further refined by nudging yellow a bit toward orange, green toward teal-green and red toward orange in Hue vs. Hue; adjusting the saturation of each color in the Hue vs. Sat curves; and reducing saturation in the upper midtones and highlights in the Sat vs. Sat curves.
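Conceptually (this is not Resolve’s internal curve math), split-toning amounts to luma-weighted color offsets: a cool offset weighted toward the shadows and a warm offset weighted toward the highlights. A minimal sketch, with offset values that are purely illustrative:

```python
def split_tone(rgb, shadow=(-0.02, 0.01, 0.02), highlight=(0.02, 0.005, -0.02)):
    """Push cool tones into the shadows and warm tones into the highlights.
    rgb is an (r, g, b) tuple in [0, 1]; the offsets are illustrative
    values, not Resolve's actual curve adjustments."""
    r, g, b = rgb
    luma = 0.2627 * r + 0.6780 * g + 0.0593 * b  # BT.2020 luma weights
    w_hi = luma          # weight grows toward the highlights
    w_sh = 1.0 - luma    # weight grows toward the shadows
    return tuple(
        min(1.0, max(0.0, c + w_sh * s + w_hi * h))
        for c, s, h in zip(rgb, shadow, highlight)
    )
```

Run a dark gray and a bright gray through this and you’ll see the former drift toward cyan-blue and the latter toward red, which is the color contrast the curves are building.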
Preserving middle gray when creating a look is vital. Cullen Kelly’s free chart includes 14 different tone curves, including ACES, Arri, Slog3, Log3G10 and DaVinci Intermediate.
When mastering in PQ (ST 2084), much of the signal range is devoted to shadow detail. Noise in darker image regions is visually masked by highlights in the image. You can witness this for yourself by covering the highlights with one hand while looking at the shadow areas of your video displayed on the monitor. Apparently, tone mappers, which compress the tonal range of HDR video to the capabilities of the end-user display, also strongly emphasize image noise, particularly in the shadows. YouTube’s processing removes some noise to achieve streaming bitrates. In order to exercise more control over the final image, you may want to denoise your video prior to rendering it for upload. Many YouTube tutorials improperly recommend enlarging the image 999%, indiscriminately blasting luma and chroma noise with heavy amounts of noise reduction, followed by tossing in hideous amounts of sharpening, destroying true detail and making the picture look like cheap camcorder footage. We suggest instead using noise reduction sparingly and adding as little sharpening as possible while being on the lookout for undesirable artifacts.
To check noise reduction in DaVinci Resolve Studio, use the highlighter in A/B mode. It may take a few moments to kick in, depending on your machine. If you start to see outlines of the subject, actual detail in the image is being affected and noise reduction should be reduced. Additionally, we suggest examining the pores of the talent’s skin for excessive smoothing, banding in large areas of uniform color with fine gradients, like walls and skies, as well as being on the lookout for jaggies or sawtooth effect, an objectionable artifact that makes the smooth outlines of objects resemble a series of staircases. In general, you’ll want to preserve some noise to prevent banding.
Rather than applying noise reduction to the entire clip, you might consider just hitting the darker regions of the image where noise is most bothersome:
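The idea can be sketched as a luma-keyed blend between the original frame and a denoised version of it; the threshold and softness values below are illustrative, not Resolve parameters:

```python
def shadow_weight(luma, threshold=0.25, softness=0.1):
    """Blend weight: 1.0 in deep shadows, falling to 0.0 above
    threshold + softness. Values are illustrative only."""
    return min(1.0, max(0.0, (threshold + softness - luma) / softness))

def blend_denoise(pixel, denoised_pixel):
    """Blend the denoised value back in only where the pixel is dark."""
    r, g, b = pixel
    luma = 0.2627 * r + 0.6780 * g + 0.0593 * b  # BT.2020 luma weights
    w = shadow_weight(luma)
    return tuple(w * d + (1.0 - w) * c for c, d in zip(pixel, denoised_pixel))
```

Midtones and highlights pass through untouched, so whatever fine detail lives there is never softened by the noise reduction.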
Grain adds texture to an otherwise squeaky clean, sterile digital image and, as HDR is rather unforgiving, is all but indispensable for hiding imperfections in complexions, makeup, graphics, visual effects and prosthetics. Adding a moderate amount of grain can help hide banding when uploading to video sharing platforms. Another seldom discussed aspect of grain is that it’s in constant motion, breathing life into each and every frame. At the same time, the aggressive compression algorithms of video sharing platforms like YouTube destroy high frequency detail, turning the voluptuous grain seen in the grading suite into unsightly macroblocking, so you’ll have to decide for yourself whether it might be preferable to not add grain to projects at all. If you are using a LUT, like Cullen Kelly’s Kodak 2383 PFE LUT for example, be sure to add your grain before, not after, the LUT.
Download a comparison between Dehancer 5.3 film grain and DaVinci Resolve grain here. It’s well-nigh impossible to see how the plug-in compares to Resolve on something like the MacBook Pro Liquid Retina XDR mini-LED, which is why we recommend throwing the clip on the timeline of your favorite NLE, setting it up for HDR and viewing on an external UHD monitor or television set.
During an appearance on Cullen Kelly’s Grade School, the brilliant colorist Jill Bogdanowicz revealed a secret to accentuating texture without it looking over-processed. While working on Joker, the colorist used Live Grain – which separates out the red, green and blue channels, creating grain that resembles scanned film – to accentuate texture in the cooler, darker backgrounds while de-emphasizing grain in the warmer, red tones of the talent’s skin. One way to accomplish this in DaVinci Resolve is to create a layer mixer beneath the grain node, open up the HSL Qualifier, switch off luma and saturation, and, using the highlighter tool to see the effect in the viewer, adjust hue to isolate the skin tones. Afterward, apply clean white, clean black and blur radius to tidy things up. Since we don’t want the skin to be completely free of grain, we add a keyer to the layer node to restore some texture to the talent’s skin. Click here to download an example (HDR) of this powerful technique.
An even easier method to de-emphasize grain on the subject is to use the new Depth Map in DaVinci Resolve Studio 18. Once you’ve isolated the subject, use the softness slider to add a bit of grain back into the talent’s skin. To see the improvement, click here to download sample footage.
Readers might also be interested in investigating some of the print film emulation plugins that are becoming more widely available.
Shift white point
You can shift your white point from D65 to D60 using the Chromatic Adaptation plugin in Resolve OFX. Place it just before your final transform.
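Resolve handles the math internally, but for the curious, a chromatic adaptation transform is essentially a matrix sandwich: convert XYZ to a cone-response space, scale by the ratio of the two white points, and convert back. A sketch using the standard Bradford matrix and published D65/D60 chromaticities; this is illustrative, not the plugin’s exact implementation:

```python
def xy_to_XYZ(x, y):
    """Chromaticity (x, y) to XYZ with Y normalized to 1."""
    return (x / y, 1.0, (1.0 - x - y) / y)

# Standard Bradford cone-response matrix
BRADFORD = [[0.8951, 0.2664, -0.1614],
            [-0.7502, 1.7135, 0.0367],
            [0.0389, -0.0685, 1.0296]]

def mat_vec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def mat_inv(m):
    """Inverse of a 3x3 matrix via the adjugate."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[adj[r][col] / det for col in range(3)] for r in range(3)]

def bradford_adapt(xyz, src_white, dst_white):
    """Adapt an XYZ color from src_white to dst_white (Bradford CAT)."""
    lms_src = mat_vec(BRADFORD, src_white)
    lms_dst = mat_vec(BRADFORD, dst_white)
    lms = mat_vec(BRADFORD, xyz)
    scaled = tuple(l * d / s for l, d, s in zip(lms, lms_dst, lms_src))
    return mat_vec(mat_inv(BRADFORD), scaled)

D65 = xy_to_XYZ(0.3127, 0.3290)      # Rec. 709 / Rec. 2020 white
D60 = xy_to_XYZ(0.32168, 0.33767)    # ACES white point
```

By construction, adapting the source white itself lands exactly on the destination white, which is a handy sanity check for any CAT implementation.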
How to modify LUTs
Cullen Kelly’s Kodak 2383 PFE LUT works its magic no matter what footage you’re working with; however, the look is pretty bold, particularly the contrast, so you might want to scale it back a smidgen. Fortunately for us, Cullen released a video demonstrating how to finesse color and contrast independently of each other so you can achieve precisely the look you’re after.
Concerning an often-asked question regarding the appearance of scopes in HDR10: one significant difference is that in SDR, the signal can ordinarily fill out the scopes, say, from 0-1023, while in HDR PQ, the bulk of the signal will usually be bunched up toward the bottom end of the waveform, from 0-200 nits or so, with only small excursions for specular highlights. So what ends up happening is that even if we set, say, 1000 nits as our peak brightness, we might actually only see occasional peaks at 400 or 600 nits and nothing greater than that, depending of course on the project and the subject matter.
The reason for this is that the average picture level (APL) of SDR and HDR should be similar (in fact, HDR not infrequently actually ends up being lower), and generally speaking, everything above 203 nits (diffuse white) is for specular highlights. So while our signal may very well appear identical when switching between 10-bit, 12-bit and HDR PQ in the waveform settings on the Color Page, if instead we were to switch the project settings themselves from HDR to SDR on the Color Management Page, we’d notice that our waveforms look quite different indeed – stretching out to fill the 10-bit scope while shrinking back down to below 100-200 nits in the HDR PQ one.
The Show Reference Levels checkbox lets you enable adjustable Low and High reference level markers by setting the Low and High sliders to something other than their defaults. These reference markers are especially useful for HDR grading where you’re working within a specific peak luminance threshold, such as when targeting 203 nits for diffuse white.
HDR Reference White
“One key new feature of HDR is that it can allow for scene to scene overall luminance changes, so that daylight scenes can feel substantially different from indoor scenes, and night scenes. Thus, constraining a system to set the diffuse white point to a constant luminance defeats some of the advantages of HDR.” (Pupillometry of HDR Video Viewing)
One of the most popular posts on our blog is about HDR reference white, which has been standardized as 203 nits. But in reality, there is no such thing as reference white, any more than there is a fixed value for 18% gray or fair skin. Diffuse white can be 145 nits indoors or as much as 400 nits outdoors, skin tones can be +1.7 stops brighter outdoors than indoors, and the same goes for 18% gray! While typical white levels presently used in PQ production range anywhere from 145-250 nits, it is recommended that HDR Reference White (diffuse white) be 203 nits, or 58% of the full PQ signal (input) level. Leaving little headroom (i.e. a much higher value) means brighter diffuse whites at the expense of flatter looking specular highlights, whereas leaving more headroom allows for better looking highlights. It should be noted that 203 nits is only the recommendation for 1,000 nit peak brightness displays: that figure gets progressively higher for displays brighter than 1,000 nits. If, however, you’ve got graphics at 203 nits over a dark image, they may overpower the scene, whereas if the scene is very bright, the graphics may be difficult to see; larger areas of diffuse white may also appear brighter than a small area – which is why 203 nits is only a recommendation and not carved in stone!
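The 58% figure follows directly from the ST 2084 inverse EOTF, which maps absolute luminance to a normalized signal level. A quick check in Python, using the constants from the SMPTE ST 2084 specification:

```python
def pq_inverse_eotf(nits):
    """SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits
    to a normalized signal level in [0, 1]."""
    m1, m2 = 0.1593017578125, 78.84375
    c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1.0 + c3 * y)) ** m2

# Roughly: 100 nits -> ~51%, 203 nits -> ~58%, 1000 nits -> ~75%
levels = {n: pq_inverse_eotf(n) for n in (100, 203, 1000)}
```

Note how everything up to 200 nits already occupies well over half of the signal range, which is exactly why HDR waveforms look bunched toward the bottom compared with SDR.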
Excellent info on the nit levels for the diffuse white and 18% gray patches of the ColorChecker. I was always confused about them.
When shooting in SDR, I used to expose the image until the diffuse white patch on the ColorChecker started to show zebra marks (zebra set to 94 on my Sony A6300).
But in HDR grading I was never sure what the nit value for that same diffuse white patch should be, or where to keep it on the waveform – 100 nits, 200 nits, etc.
Your post answers exactly that question!
Thank you so much!!!!
Purchasing a new camera soon.
What would you suggest – the Sony A7IV or the Sony A7S3?
The confusion for me is that I do both photo and video work, and on paper the A7IV seems a good hybrid.
But its crop in 4K60 mode and very bad rolling shutter have created some doubts in my mind.
The A7S3 has really good rolling shutter performance, but at 12MP it’s a bit on the low end for photos.
But 4K60 and 4K120 with no crop make it so much more appealing…
With Sony cameras, you want to ETTR S-Log3 by as much as +1.66 stops, so setting zebras to 94, provided you’re able to recover those highlights in post, is in the ballpark. You’ll then park your diffuse whites in the neighborhood of 203 nits on the waveform in post, but as is explained in the blog post, that number is not carved in stone. I’m sorry, but I can’t give advice on a camera for stills, as I only do video.
Thank you so much, Jon, for your prompt reply and the work you are doing in the world of HDR!
HDR workflow, hardware, camera equipment, dynamic range, codecs, RAW, grading software, monitors, etc. – there are so many variables to it.
But I am extremely thankful that we have blogs such as yours demystifying this mountain of HDR,
with extremely detailed yet simple-to-understand information.
Really appreciate your work.
Again, thanks. Keep up the good work, sir.
Last question – I know you have already mentioned this in one of your earlier blog posts regarding sub-$6,000 cameras for HDR and the minimum things they should have: 13 stops of DR, RAW or 10-bit log minimum.
As I mentioned above, I would love to buy the A7S3 for HDR work, but I’m still surfing the market for a hybrid camera.
With the recent launch of the Canon R5C, it seemed a good proposition,
but I’m not sure if it truly has 13 stops of dynamic range.
Did you get a chance to explore the Canon R5C?
The other option would be the Sony A1 – in the CineD lab test it demonstrates ~12.4-12.7 stops of dynamic range,
and it has good photo capabilities as well.
But the A1 is almost 1.6x more expensive than the A7S3 or Canon R5C here in India.
Also, Canon RF lenses in India are almost 2x more expensive compared to their Sigma or Sony E-mount counterparts.
So that’s a bummer.
Thanks for the kind words, Apoorv. As I wrote in one of my blog posts, several cameras under $6,000 with the required dynamic range include the Ursa 12K, the EOS C70 and the Red Komodo. I believe you’ve got to shoot 12K to get the best dynamic range out of the Ursa, however, and I don’t care for the form factor. Canon surprised everyone when they added RAW to the EOS C70 at up to 60p that records to ordinary V90 SD cards, so it is high on my list. The Komodo is a phenomenal camera but its true cost will be much higher than $6,000 after purchasing accessories and approved media. I think the body only sells for close to $8,000 here in Vietnam. The Sony a1 8K is 4:2:0 and as far as I know, it isn’t able to shoot 8K RAW, (I read somewhere that it shoots line skipped 4.3K ProRes Raw, but can’t confirm that) nor does it have a 4K HQ mode like the R5, so it is less interesting to me. I’ve been looking at reviews of the Canon EOS R5 C recently and it does appear to have decidedly more dynamic range and less noise than the R5, so it might very well be a contender – though the C70 still seems to be the better choice for those who plan to shoot strictly video. And yes, RF lenses are expensive!
Thanks Jon for the feedback.
I’ll explore the Canon C70 (maybe with a Speed Booster one can also get closer to full-frame equivalence; the built-in ND is a plus!).
I may download some sample footage and try to do some HDR this evening. 😊
I read your articles, but I’m not able to get the same result from the timeline, QuickTime and YouTube. https://workspace.picter.com/v/gcqR7ozn
I tried several setups and followed YouTube’s upload requirements. I rendered in H.265 and ProRes 422. I’m editing on a MacBook Pro M1 Max with the integrated HDR P3 1600-nit display and profile, DaVinci Resolve Studio, and files in S-Log3/S-Gamut3.Cine from an a7S III – a very classic setup.
I exported with HDR10+ metadata and without, and in ST 2084 Rec. 2020 and Rec. 2020 limited to P3, many times, but the result in QuickTime or on YouTube always has low contrast, and I do not understand why :( Can you help me solve this problem? I can give you a big tip if you help me solve it, thanks very much :)
ML, Davide Marconcini
Check your email.