Monster Guide: HDR10 in Resolve Studio 18 (Part VI)

Part I: Project settings

Part II: BT.2020 or P3-D65 Limited?

Part III: HDR Palette, project render settings, generate MaxCLL and MaxFALL HDR10 metadata

Part IV: HDR to SDR Conversion LUT for YouTube

Part V: RAW

Part VI: The Grade

Part VII: The State of HDR Film Emulation LUTs & Plugins

Part VIII: Why HDR Production Monitors Matter

Stuff we’re doing differently now than in the past:

  • During shooting, avoiding yellow entirely in the Ninja V’s false color guide except for specular highlights (aside from the infrequent occasions when we purposely want to blow out highlights)
  • Using the simplest tools for the job
  • No longer using the X-Rite ColorChecker in post
  • Shooting S-Gamut3.cine rather than S-Gamut3
  • Scrupulously applying a curve to each and every project
  • Focusing more on color separation and keeping tonal contrast under control
  • Labeling each and every node
  • Using video levels rather than full data levels for external monitoring
  • Split-toning by un-ganging the custom curve and pushing cyan into the shadows and reds into the highlights
  • Applying masking tools photographically rather than technically (for example, rather than stupidly slapping on a vignette or power window to draw attention to the subject or their face, using the tools in Resolve to augment the lighting that’s already there)
  • Exporting projects using ProRes 4444 XQ rather than HEVC

Following these precepts, the extent of our adjustments has actually decreased, with the result that we’re less likely to push the image to the breaking point. Selecting the simplest tool for the job – for example, the Global wheel rather than the eyedropper and Hue vs. Hue – reduces the chances of introducing unwanted artifacts; lower contrast means not having to rely so heavily on power windows all over the place; split-toning and nudging yellows and reds toward orange and greens toward teal add color separation; and applying grain and blur minimizes the objectionable video look. Split-toning is a characteristic of traditional photochemical print stock responsible for much of its charm and one of the traits that makes celluloid so attractive to filmmakers. Although the overwhelming majority of online HDR tutorials (including those by Blackmagic and Dolby Vision certified trainers) insist on full data levels, video levels turn out to be the correct choice most of the time – and they also happen to match the output of Final Cut Pro, which matters to us, as we don’t work exclusively in DaVinci Resolve. We’re no longer using the X-Rite ColorChecker (aside from white balancing the shot before recording) because printed color charts are only accurate for HD colorimetry and are absolutely useless for HDR WCG. We’d also like to dispense once and for all with the horror the word clipping inspires among authorities: when it’s a conscious, creative decision, occasional clipping of highlights can be exceedingly powerful.

Natural daylight coming in from a window; no fill light or reflector used. Exposure was very hot. The photo shows the Ninja V’s display.

Ninja V false color: red and bright orange indicate clipping; yellow represents specular highlights.

If diffuse white is 203 nits, +1.66 stops ETTR would fall somewhere around 800 nits. We’ve exceeded that by quite a bit. Will we be able to recover the detail in post? Or to put it another way, does the viewer need to see detail in the brightest highlights of this particular frame?
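If you’d like to sanity-check that figure, the arithmetic is just doublings above diffuse white. Here’s a minimal sketch in plain Python, assuming diffuse white is graded to 203 nits (the BT.2408 recommendation); depending on whether you anchor at +1.66 or a full +2 stops, the ceiling lands somewhere in the 650–800 nit range.

```python
# Sanity check: the nit level reached by exposing N stops above diffuse white.
# Assumes diffuse white is graded to 203 nits (BT.2408 recommendation).
DIFFUSE_WHITE_NITS = 203.0

def stops_above_diffuse_white(stops: float) -> float:
    """Each stop doubles luminance, so N stops above 203 nits is 203 * 2**N."""
    return DIFFUSE_WHITE_NITS * 2.0 ** stops

for stops in (1.0, 1.66, 2.0):
    print(f"+{stops:.2f} stops over diffuse white ≈ {stops_above_diffuse_white(stops):.0f} nits")
# +1.00 stops ≈ 406 nits, +1.66 stops ≈ 642 nits, +2.00 stops ≈ 812 nits
```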

In order to prolong the lifespan of your OLED display, we recommend cropping out intense light sources on the timeline of your NLE whenever possible. If they remain on the screen for too long, they can result in image retention or burn-in. At the same time, prolonged display of an image smaller than the television may also shorten the life of the unit.

You might want to create a new node at the beginning of your node tree and knock down saturation of highlights and shadows a smidgen. Authorities insist shadows must be neutralized, but you should decide for yourself what works best.
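For the curious, here’s a rough numpy sketch of what luminance-weighted desaturation does numerically – a stand-in for the sort of thing you might dial in with a Lum vs. Sat curve, not Resolve’s actual math. The amounts and pivot points are invented purely for illustration.

```python
import numpy as np

def desaturate_extremes(rgb: np.ndarray, shadow_amt=0.15, highlight_amt=0.15) -> np.ndarray:
    """Reduce saturation slightly near black and near white.
    rgb: float array (H, W, 3), normalized 0..1 (illustrative only)."""
    # Rec.2020 luma weights; any reasonable luma approximation works for this sketch.
    luma = rgb @ np.array([0.2627, 0.6780, 0.0593])
    # Weight ramps: 1 at the extremes, 0 in the midtones (pivots chosen arbitrarily).
    shadow_w = np.clip((0.2 - luma) / 0.2, 0.0, 1.0)
    highlight_w = np.clip((luma - 0.7) / 0.3, 0.0, 1.0)
    sat_scale = 1.0 - shadow_amt * shadow_w - highlight_amt * highlight_w
    # Scale chroma (distance from the luma axis) by sat_scale.
    return luma[..., None] + (rgb - luma[..., None]) * sat_scale[..., None]
```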

As a rule of thumb, the more contrasty the image, the less saturation you’ll need overall. Another general principle is that highly saturated colors should not be too bright.

While the extended color volume of HDR results in perceptibly more vibrant colors, it pays to be cautious with saturation. Glowing skin and radioactive foliage are signs that saturation is cranked up too high. To keep an eye on saturation, use the vectorscope. To see the highlight and shadow excursions, enable the “extents” option of your scopes, which draws an outline around all graph excursions to show the true level of any overshoots and undershoots in the video signal.

Custom curves

The overwhelming majority of online tutorials recommend making an S-curve, but they fail to make the crucial distinction between footage that has been underexposed, exposed for middle gray, or overexposed. Properly exposed S-Log3 should be exposed to the right by around 1.66 stops – and the correct way to shape the curve is to grab it somewhere around the mid-point and gently drag it downward like so.

On the other hand, if the footage has been underexposed, you’ll be grabbing the curve toward the left side and gradually lifting upward to form a curve in the opposite direction of the overexposed clip above.
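Purely as a numerical illustration of the two cases (no substitute for shaping the curve by eye on a calibrated display), here’s a small sketch using a monotone spline; the control points are invented for demonstration.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

x = np.linspace(0.0, 1.0, 256)  # normalized code values in

# ETTR'd (bright) footage: anchor black and white, pull the mid-point gently down.
bring_down = PchipInterpolator([0.0, 0.5, 1.0], [0.0, 0.42, 1.0])

# Underexposed footage: grab toward the left side and lift upward instead.
lift_up = PchipInterpolator([0.0, 0.25, 1.0], [0.0, 0.32, 1.0])

print(bring_down(0.5))   # ~0.42: midtones brought down
print(lift_up(0.25))     # ~0.32: shadows and lower mids lifted
y_down, y_up = bring_down(x), lift_up(x)  # full curves, e.g. for plotting
```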

Skin Tones: 5 Tips

  1. Don’t key faces.
  2. Don’t try to correct skin tones using the tint and temperature sliders in the RAW controls.
  3. Grab a still before color correction and use the image wipe in the viewer to compare before/after as you make adjustments.
  4. Use Offset or the Global wheel in the HDR palette for correcting skin tones.
  5. Upload to YouTube using ProRes 4444 XQ, not HEVC.
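Deliveries normally come straight out of Resolve’s Deliver page, but if you ever need to transcode an existing HDR10 master to ProRes 4444 XQ outside Resolve, something along these lines should work. This is a hedged sketch assuming an ffmpeg build with the prores_ks encoder; the file names are placeholders.

```python
import subprocess

# Illustrative only: re-encode an existing HDR10 master as ProRes 4444 XQ for upload.
# Assumes ffmpeg with the prores_ks encoder; "master.mov" / "upload.mov" are placeholders.
subprocess.run([
    "ffmpeg", "-i", "master.mov",
    "-c:v", "prores_ks", "-profile:v", "4444xq",   # ProRes 4444 XQ
    "-pix_fmt", "yuv444p10le",
    "-color_primaries", "bt2020",                  # tag Rec.2020 primaries
    "-color_trc", "smpte2084",                     # tag PQ (ST 2084) transfer
    "-colorspace", "bt2020nc",
    "-c:a", "copy",
    "upload.mov",
], check=True)
```

The color tags only label the stream; they don’t convert anything, so the master itself must already be graded and rendered as HDR10.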

Creating color separation with split-toning

Split-toning is a characteristic of film stocks in which cool shadows and warm highlights add color contrast to the image. It’s accomplished by un-ganging the channels and adding a few pixels of blue and green to the shadows and a tiny bit of red and green to the highlights. Here, we’ve gone a little overboard with the green! Once you’ve made your adjustments to the curve, you can finesse them with the sliders to the right. Just remember, 50 represents zero – moving the red slider to the left of 50, for example, begins to add cyan.

It may seem trivial, but split-toning is the single most important ingredient in creating a look, and the image will be perceived as more colorful and less video-ish than if you had simply dialed in more saturation. The look can be further refined by nudging yellow a bit toward orange, green toward teal-green and red toward orange in Hue vs. Hue; adjusting the saturation of each color in the Hue vs. Sat curves; and reducing saturation in the upper midtones and highlights in the Sat vs. Sat curves.
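To make the mechanics of the shadow and highlight offsets concrete, here’s a rough numpy analogue of the un-ganged-curves adjustment (it covers only the split-tone itself, not the Hue vs. Hue refinements). This isn’t Resolve’s curve math, and the offset amounts and luma weights are invented, deliberately subtle values.

```python
import numpy as np

def split_tone(rgb: np.ndarray, shadow_offset=(0.00, 0.01, 0.02),
               highlight_offset=(0.02, 0.01, 0.00)) -> np.ndarray:
    """Push a little blue/green into the shadows and red/green into the highlights.
    rgb: float array (H, W, 3) in 0..1. Offsets are (R, G, B), intentionally tiny."""
    luma = rgb @ np.array([0.2627, 0.6780, 0.0593])       # Rec.2020 luma approximation
    shadow_w = np.clip(1.0 - luma / 0.4, 0.0, 1.0)        # strongest near black
    highlight_w = np.clip((luma - 0.6) / 0.4, 0.0, 1.0)   # strongest near white
    out = (rgb
           + shadow_w[..., None] * np.array(shadow_offset)
           + highlight_w[..., None] * np.array(highlight_offset))
    return np.clip(out, 0.0, 1.0)
```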

Noise Reduction

When mastering in PQ (ST 2084), much of the signal range is devoted to shadow detail. Noise in darker image regions is visually masked by highlights in the image. You can see this for yourself by covering the highlights with one hand while looking at the shadow areas of your video on the monitor. YouTube’s processing removes some noise to achieve its streaming bitrates, so in order to exercise more control over the final image, you’ll want to denoise your video prior to rendering it for upload. Many YouTube tutorials improperly recommend enlarging the image 999%, indiscriminately blasting luma and chroma noise with heavy noise reduction and then tossing in hideous amounts of sharpening, destroying true detail and making the picture look like cheap camcorder footage. We suggest instead using noise reduction sparingly and adding as little sharpening as possible while staying on the lookout for undesirable artifacts.

To check noise reduction in DaVinci Resolve Studio, use the highlighter in A/B mode. It may take a few moments to kick in, depending on your machine. If you start to see outlines of the subject, actual detail in the image is being affected and noise reduction should be dialed back. Additionally, we suggest examining the pores of the talent’s skin for excessive smoothing, checking for banding in areas of smooth tonal transitions, like walls, and keeping an eye out for jaggies, an objectionable artifact that makes the smooth outlines of objects resemble staircases.

Film Grain

Grain adds texture to an otherwise squeaky clean, sterile digital image and, as HDR is rather unforgiving, is all but indispensable for hiding imperfections in complexions, makeup, graphics, visual effects and prosthetics. Another seldom discussed aspect of grain is that it’s in constant motion, breathing life into each and every frame. At the same time, the aggressive compression algorithms of video sharing platforms like YouTube destroy high frequency detail, turning the voluptuous grain seen in the grading suite into unsightly macroblocking, so you’ll have to decide for yourself whether it’s preferable not to add grain to projects at all.

Download a comparison between Dehancer 5.3 film grain and DaVinci Resolve grain here. It’s well-nigh impossible to see how the plug-in compares to Resolve on something like the MacBook Pro Liquid Retina XDR mini-LED, which is why we recommend throwing the clip on the timeline of your favorite NLE, setting it up for HDR and viewing on an external UHD monitor or television set.

Creating textural depth. Before (L) After (R)

Textural depth

During an appearance on Cullen Kelly’s Grade School, the brilliant colorist Jill Bogdanowicz revealed a secret to accenting texture without making it look over-processed. While working on Joker, she used Live Grain – which separates out the red, green and blue channels, creating grain that resembles scanned film – to accentuate texture in the cooler, darker backgrounds while de-emphasizing grain in the warmer, red tones of the talent’s skin. One way to accomplish this in DaVinci Resolve is to create a layer mixer beneath the grain node, open up the HSL Qualifier, switch off luma and saturation, and, using the highlighter tool to see the effect in the viewer, adjust hue to isolate the skin tones. Afterward, apply clean white, clean black and blur radius to tidy things up. Since we don’t want the skin to be completely free of grain, we add a keyer to the layer node to restore some texture to the talent’s skin. Click here to download an example (HDR) of this powerful technique.
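Resolve does this with a layer mixer, HSL qualifier and keyer, but the underlying idea is easy to sketch numerically. Below is a hedged numpy approximation in which per-channel Gaussian noise stands in for Live Grain; the skin-hue range, strengths and thresholds are invented for illustration, not taken from any real grade.

```python
import numpy as np

def add_grain_sparing_skin(rgb: np.ndarray, strength=0.03, skin_reduction=0.7,
                           rng=np.random.default_rng(0)) -> np.ndarray:
    """Per-channel grain (scanned-film style), de-emphasized where the hue looks like skin.
    rgb: float (H, W, 3) in 0..1. All constants are illustrative."""
    # Crude hue estimate: skin tones cluster in the red/orange range.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    chroma = rgb.max(axis=-1) - rgb.min(axis=-1)
    hue = np.degrees(np.arctan2(np.sqrt(3.0) * (g - b), 2.0 * r - g - b)) % 360.0
    skin_mask = ((hue > 10) & (hue < 50) & (chroma > 0.05)).astype(float)
    # Independent noise per channel resembles scanned film more than mono noise does.
    noise = rng.normal(0.0, strength, rgb.shape)
    grain_gain = 1.0 - skin_reduction * skin_mask      # keep ~30% of the grain on skin
    return np.clip(rgb + noise * grain_gain[..., None], 0.0, 1.0)
```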

Before/after applying Depth Mask to de-emphasize grain on the subject

An even easier method to de-emphasize grain on the subject is to use the new Depth Map in DaVinci Resolve Studio 18. Once you’ve isolated the subject, use the softness slider to add a bit of grain back into the talent’s skin. To see the improvement, click here to download sample footage.

Readers might also be interested in investigating some of the film print emulation plugins that are becoming more widely available.

Here’s another example of split-toning to add color separation. A link to the downloadable file with film grain added can be found in the description.

The Scopes

A question that comes up often concerns the appearance of the scopes in HDR10. One significant difference is that in SDR the signal can ordinarily fill out the scopes, say from 0–1023, whereas in HDR PQ the bulk of the signal will usually be bunched up toward the bottom of the waveform, from 0–200 nits or so, with only small excursions for specular highlights. So even if we set, say, 1,000 nits as our peak brightness, we might only see occasional peaks at 400 or 600 nits and nothing greater, depending of course on the project and the subject matter.

The reason for this is that the average picture level (APL) of SDR and HDR should be similar (in fact, HDR often ends up being lower), and generally speaking, everything above 203 nits (diffuse white) is reserved for specular highlights. So while our signal may appear identical when switching between 10-bit, 12-bit and HDR PQ in the waveform settings on the Color Page, if instead we were to switch the project settings themselves from HDR to SDR on the Color Management Page, we’d notice that our waveforms look quite different indeed – stretching out to fill the 10-bit scope while shrinking back down to below 100–200 nits in the HDR PQ one.

The Show Reference Levels checkbox lets you enable adjustable Low and High reference level markers by setting the Low and High sliders to something other than their defaults. These reference markers are especially useful for HDR grading where you’re working within a specific peak luminance threshold, such as when targeting 203 nits for diffuse white.

HDR Reference White

While typical white levels presently used in PQ production range anywhere from 145 to 250 nits, it is recommended that HDR Reference White (diffuse white) be 203 nits, or 58% of the full PQ signal (input) level. It should be noted that 203 nits is the recommendation only for displays with 1,000 nits of peak brightness: the figure gets progressively higher for displays brighter than 1,000 nits. If, however, you’ve got graphics at 203 nits over a dark image, they may overpower the scene, whereas if the scene is very bright, the graphics may be difficult to see; larger areas of diffuse white may also appear brighter than a small area – which is why 203 nits is only a recommendation and not carved in stone!
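That 58% figure falls straight out of the PQ curve. If you’d like to verify it, here’s a small sketch of the ST 2084 inverse EOTF, using the constants from the standard, that maps absolute nits to a normalized signal level.

```python
def pq_inverse_eotf(nits: float) -> float:
    """ST 2084 inverse EOTF: absolute luminance (cd/m²) -> normalized PQ signal (0..1)."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = max(nits, 0.0) / 10000.0          # PQ is defined relative to 10,000 nits
    return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

for nits in (100, 203, 1000, 10000):
    print(f"{nits:>5} nits -> {pq_inverse_eotf(nits) * 100:.1f}% of the PQ signal")
# 100 nits ≈ 51%, 203 nits ≈ 58%, 1000 nits ≈ 75%, 10000 nits = 100%
```

It also shows why the waveform bunches up toward the bottom in PQ: everything below 100 nits already occupies roughly half the signal range.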

13 thoughts on “Monster Guide: HDR10 in Resolve Studio 18 (Part VI)”


  1. Excellent Info on the Nits Level for the Diffused White & 18% Grey patch of the Color Checker..

    I was always confused about them..

    Because when shooting in SDR .. I used to expose the image..until the Diffused White Patch on Color Checker started to show Zebra Marks ( Zebra set to 94 on my Sony A6300)..

    But in HDR Grading I was always confused what the Nit Value for that same Diffused White Patch in Color Checker should be..
    I was always confused where to keep my Diffused White Patch in the Waveform.. 100nit.. 200nit.. etc..

    Your post exactly answers that question !

    Thank you so much!!!!

    Also,
    Purchasing a new camera soon..

    What would you suggest – Sony A7IV or Sony A7S3…

    The confusion for me is..I do both Photo and Video work.. and on paper the A7IV seems a good hybrid..

    But its crop in 4K60 mode and very bad rolling shutter have now created some doubts in my mind..

    A7S3 has really good Rolling Shutter performance.. but at 12MP.. it’s a bit on the low end for Photos..
    But 4K60 and 4K120 with no Crop makes it so much more appealing…

    1. With Sony cameras, you want to ETTR S-Log3 by as much as +1.66 stops, so setting zebras to 94, provided you’re able to recover those highlights in post, is in the ballpark. You’ll then park your diffuse whites in the neighborhood of 203 nits on the waveform in post, but as is explained in the blog post, that number is not carved in stone. I’m sorry, but I can’t give advice on a camera for stills, as I only do video.

      1. Thank you so much Jon for your prompt reply and the work you are doing in the world of HDR!

        HDR workflow, Hardware, Camera Equipment, Dynamic Range, Codecs, RAW, Grading Software, Monitors, etc. etc. – there are so many variables to it.

        But I am extremely thankful that we have blogs such as yours that are demystifying this mountain of HDR..
        With extremely detailed yet simple to understand information.

        Really appreciate your work.

        Again thanks. Keep up the good work Sir.

        Last question – (I know you have already mentioned this in one of your blog posts earlier – regarding sub-$6,000 cameras for HDR and the minimum things they should have: 13 stops of DR, RAW or 10-bit Log minimum)

        As I had mentioned above.. I would love to buy A7S3 for HDR work.. but still surfing the market if there any Hybrid Camera out there..

        Recently with the launch of Canon R5C it seemed a good proposition…
        But not sure if it truly has 13Stops of Dynamic Range…

        Did you get a chance to explore Canon R5C ?

        Other option would be to go for the Sony A1.. in CineD’s lab test it demonstrates ~12.4–12.7 stops of Dynamic Range..

        And has good Photo capabilities as well..

        But then going for the A1 would be almost 1.6x more expensive than the A7S3 or Canon R5C here in India..

        Also Canon RF lenses in India are almost 2x more expensive compared to their Sigma or Sony E-mount counterparts…..
        So that’s a bummer..

      2. Thanks for the kind words, Apoorv. As I wrote in one of my blog posts, several cameras under $6,000 with the required dynamic range include the Ursa 12K, the EOS C70 and the Red Komodo. I believe you’ve got to shoot 12K to get the best dynamic range out of the Ursa, however, and I don’t care for the form factor. Canon surprised everyone when they added RAW recording at up to 60p to the EOS C70, writing to ordinary V90 SD cards, so it is high on my list. The Komodo is a phenomenal camera, but its true cost will be much higher than $6,000 after purchasing accessories and approved media; I think the body alone sells for close to $8,000 here in Vietnam. The Sony a1 is 4:2:0 in 8K and, as far as I know, it isn’t able to shoot 8K RAW (I read somewhere that it shoots line-skipped 4.3K ProRes RAW, but can’t confirm that), nor does it have a 4K HQ mode like the R5, so it is less interesting to me. I’ve been looking at reviews of the Canon EOS R5 C recently and it does appear to have decidedly more dynamic range and less noise than the R5, so it might very well be a contender – though the C70 still seems to be the better choice for those who plan to shoot strictly video. And yes, RF lenses are expensive!

  2. Thanks Jon for the feedback.

    I’ll explore the Canon C70 (maybe with a Speed Booster one can also get closer to full-frame equivalent; also, the built-in ND is a plus!)
