Part I: Project settings
Part II: BT.2020 or P3-D65 Limited?
Part V: RAW
Part VI: The Grade
Part VIII: Why HDR Production Monitors Matter
Stuff we’re doing differently now than in the past:
- During shooting, avoiding yellow in the false color guide of the Ninja V entirely except for specular highlights (aside from the infrequent occasions when we purposely want to blow out highlights)
- Using the simplest tools for the job
- No longer using the X-Rite ColorChecker in post
- Shooting S-Gamut3.cine rather than S-Gamut3
- Scrupulously applying a curve to each and every project
- Focusing more on color separation and keeping tonal contrast under control
- Labeling each and every node
- Split-toning by un-ganging the custom curve and pushing cyan into the shadows and reds into the highlights
- Applying masking tools photographically rather than technically (for example, rather than stupidly slapping on a vignette or power window to draw attention to the subject or their face, using the tools in Resolve to augment the lighting that’s already there)
- Exporting projects using ProRes 4444 XQ rather than HEVC
Following these precepts, the extent of our adjustments has actually decreased, with the result that we’re less likely to push the image to the breaking point. Selecting the simplest tool for the job (for example, Offset rather than the eyedropper tool and Hue vs. Hue) reduces the chances of introducing unwanted artifacts; lower contrast means not having to rely so much on power windows all over the place; split-toning and nudging yellows and reds toward orange and green toward teal adds color separation; and applying grain and blur minimizes the objectionable video look. Split-toning is a characteristic of traditional photochemical print stock responsible for much of its charm and one of the traits that makes celluloid so attractive to filmmakers. We’re no longer using the X-Rite ColorChecker (apart from white balancing the shot before recording) because printed color charts are only accurate for HD colorimetry and are useless for HDR wide color gamut.
Natural daylight coming in from a window; no fill light or reflector used. Exposure was very hot. Photo is of the Ninja V display.
Ninja V false color. Red and bright orange are clipping, yellow represents specular highlights.
You might want to create a new node at the beginning of your node tree and knock down saturation of highlights and shadows a smidgen. Authorities insist shadows must be neutralized, but you should decide for yourself what works best.
The extended brightness of HDR is not there to increase the overall light level of the entire picture but to allow headroom for specular highlights. The APL (average picture level) of HDR movies is usually the same as, or often even lower than, that of SDR content, which is why we don’t need to jump for the remote to adjust brightness when switching between SDR and HDR content on Netflix or YouTube. A picture that is too bright will cause eye strain in the viewer, especially since HDR content is intended to be watched in a dark viewing environment. Making the entire image brighter would be perceived as poor quality and would also risk triggering the ABL (auto brightness limiter) feature on OLED displays whose average picture level is <20% of peak brightness. Unlike SDR, PQ HDR is an absolute standard, meaning that it’s not possible to raise brightness to accommodate ambient lighting. In addition to the technical limitations, there is also the aesthetic element: as you raise the APL, there is less and less headroom for specular highlights and they’re no longer impactful. This is why you’ll often see diffuse whites in dramas lower than the recommended 203 nits. Exceptions to the rule are when filmmakers use the extended brightness of HDR when a character goes from a dimly lit interior to the sunny outdoors, a technique that is not used nearly often enough, or to intentionally create discomfort in the viewer (e.g. flashing strobe lights).
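The arithmetic behind that trade-off is simple enough to sketch. A minimal Python example (the function name is ours; 203 nits and 1,000 nits are the diffuse white and peak values discussed later in this piece):

```python
import math

def highlight_headroom_stops(diffuse_white_nits: float, peak_nits: float) -> float:
    """Stops of specular-highlight headroom left above diffuse white."""
    return math.log2(peak_nits / diffuse_white_nits)

# Recommended 203-nit diffuse white on a 1,000-nit display:
print(round(highlight_headroom_stops(203, 1000), 2))   # 2.3 stops of headroom

# Raise the APL so diffuse white sits at 400 nits and the headroom shrinks:
print(round(highlight_headroom_stops(400, 1000), 2))   # 1.32 stops of headroom
```

Nearly a full stop of highlight headroom evaporates, which is exactly why the specular highlights stop being impactful.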
You can see for yourself just how much of a difference leaving headroom for specular highlights makes: gradually increase the brightness of any one of your HDR videos and you’ll instantly see the impact of those specular highlights vanish before your very eyes.
As a rule of thumb, the more contrasty the image, the less saturation you’ll need in the image overall. Another general principle is that highly saturated colors should not be too bright.
While the extended color volume of HDR results in perceptibly more vibrant colors, it pays to be cautious with saturation. Glowing skin and radioactive foliage are indications that saturation is cranked up too high. To keep an eye on saturation, use the vectorscope. In order to see the highlight and shadow excursions, enable the “extents” option of your scopes. Extents create an outline highlighting all graph excursions to show you the true level of all overshoots and undershoots in the video signal.
If you’ve got good color separation, the signal mass in the vectorscope should straddle at least two quadrants while preserving skin tones. Rather than reaching for the saturation knob to increase saturation, try this method instead:
The overwhelming majority of online tutorials recommend making an S-curve, but they fail to make the crucial distinction between footage that has been underexposed, exposed for middle gray, or overexposed. Properly exposed S-Log3 should be exposed to the right by around 1.66 stops – and the correct way to shape the curve is to grab it somewhere around the mid-point and gently drag it downward like so.
On the other hand, if the footage has been underexposed, you’ll be grabbing the curve toward the left side and gradually lifting upward to form a curve in the opposite direction of the overexposed clip above.
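To see why the curve gets pulled down on properly exposed footage and lifted on underexposed footage, run mid gray through the S-Log3 encoding function. A quick sketch (the constants come from Sony’s published S-Log3 formula; treat the 1.66-stop offset as the rule of thumb it is):

```python
import math

def slog3_encode(x: float) -> float:
    """Sony S-Log3 OETF: scene-linear reflectance in, normalized code value out."""
    if x >= 0.01125:
        return (420 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5) / 1023
    return (x * (171.2102946929 - 95) / 0.01125 + 95) / 1023

mid_gray = 0.18
print(f"mid gray at nominal exposure: {slog3_encode(mid_gray):.1%}")            # ~41%
print(f"mid gray ETTR'd +1.66 stops:  {slog3_encode(mid_gray * 2**1.66):.1%}")  # ~53%
```

With mid gray parked around 53% of the signal instead of its nominal 41%, the middle of the curve has to come down to restore normal tonality, which is exactly the gentle downward drag described above; underexposed footage puts mid gray below 41%, so the curve goes the other way.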
Balancing shots/correcting skin tones
We’ve squandered years watching useless YouTube tutorials on balancing shots and correcting skin tones, futzing around with the color wheels, power masks and hue versus hue, but it always took forever, we were always guessing and we were never completely satisfied with the results. So here’s what we’re doing now; it’s completely predictable, repeatable and the results are pretty much instantaneous. We go into Offset and have a look at the vectorscope (it’s nearly always leaning toward green); then we nudge the green channel over until the vectorscope is close to sitting directly on the skin tone line. An even faster method is to activate the Hotkeys menu in the Color pull-down menu and balance with a numeric keypad, if you’ve got one. There’s no quicker way to balance a shot than this.
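The logic of that Offset nudge can be sketched in a few lines. This is a toy illustration only, using a known-neutral patch and plain linear RGB; it is not how Resolve’s Offset behaves internally, and the function name is ours:

```python
import numpy as np

# Toy "Offset"-style balance: the clip has a green cast, so the vectorscope
# mass leans green. Nudging the green channel's offset until a known-neutral
# region reads R == G == B re-centers the trace.
def offset_balance_green(frame: np.ndarray, neutral_region: np.ndarray) -> np.ndarray:
    r = neutral_region[..., 0].mean()
    g = neutral_region[..., 1].mean()
    b = neutral_region[..., 2].mean()
    green_offset = (r + b) / 2 - g        # how far green sits from neutral
    out = frame.copy()
    out[..., 1] += green_offset           # one nudge of the green channel
    return out

# A gray card sampled under a green-leaning light:
frame = np.full((4, 4, 3), [0.40, 0.46, 0.40])
balanced = offset_balance_green(frame, frame)
print(balanced[0, 0])   # roughly [0.40, 0.40, 0.40]
```

In practice you judge the nudge against the skin tone line on the vectorscope rather than a gray card, but the principle is the same: one offset move instead of minutes of wheel-twiddling.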
Creating color separation with split-toning
Split-toning is a characteristic of film stocks where you’ll see cool shadows and warm highlights adding color contrast to the image. This is accomplished by un-ganging the channels and adding a few points of blue and green to the shadows and a tiny bit of red and green to the highlights. Once you’ve made your adjustments to the curve, you can finesse them with the sliders to the right. Just remember, 50 represents zero – for example, moving the red slider to the left of 50 begins to add cyan.
It may seem trivial, but split-toning is the single most important ingredient toward creating a look; and the image will be perceived as more colorful and less video-ish than dialing in more saturation. The look can be further refined by nudging yellow a bit toward orange, green toward teal-green and red toward orange in Hue vs. Hue; adjusting the saturation of each color in the Hue vs. Sat curves; and reducing saturation in the upper midtones and highlights in the Sat vs. Sat curves.
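As a rough illustration of what those un-ganged curve moves do to the pixels, here’s a toy split-tone in Python. The push amounts and the simple luma-based weighting are our simplifications for demonstration, not Resolve’s actual curve math:

```python
import numpy as np

def split_tone(rgb, shadow_push=(0.0, 0.01, 0.02), highlight_push=(0.02, 0.01, 0.0)):
    """Toy split-tone: weight a cool (green/blue) push into the shadows and a
    warm (red/green) push into the highlights by luma, mimicking un-ganged
    custom curves. Push values are illustrative, not Resolve slider units."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])   # Rec.709 luma weights
    shadow_w = (1 - luma)[..., None]                  # strongest in shadows
    highlight_w = luma[..., None]                     # strongest in highlights
    return np.clip(rgb + shadow_w * np.array(shadow_push)
                       + highlight_w * np.array(highlight_push), 0, 1)

dark = split_tone(np.array([[0.1, 0.1, 0.1]]))
bright = split_tone(np.array([[0.8, 0.8, 0.8]]))
print(dark)    # blue channel now highest: cool shadows
print(bright)  # red channel now highest: warm highlights
```

A neutral dark pixel drifts cool and a neutral bright pixel drifts warm, which is the color contrast the un-ganged curves are buying you.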
When mastering in PQ (ST 2084), much of the signal range is devoted to shadow detail. Noise in darker image regions is visually masked by highlights in the image. You can witness this for yourself by covering the highlights with one hand while looking at the shadow areas of your video displayed on the monitor. YouTube’s processing removes some noise to achieve streaming bitrates. In order to exercise more control over the final image, you may want to denoise your video prior to rendering it for upload. Many YouTube tutorials improperly recommend enlarging the image 999%, indiscriminately blasting luma and chroma noise with heavy amounts of noise reduction, followed by tossing in hideous amounts of sharpening, destroying true detail and making the picture look like cheap camcorder footage. We suggest instead using noise reduction sparingly and adding as little sharpening as possible while being on the lookout for undesirable artifacts.
To check noise reduction in DaVinci Resolve Studio, use the highlighter in A/B mode. It may take a few moments to kick in, depending on your machine. If you start to see outlines of the subject, actual detail in the image is being affected and noise reduction should be reduced. Additionally, we suggest examining the pores of the talent’s skin for excessive smoothing, banding in large areas of uniform color with fine gradients, like walls and skies, as well as being on the lookout for jaggies or sawtooth effect, an objectionable artifact that makes the smooth outlines of objects resemble a series of staircases. In general, you’ll want to preserve some noise to prevent banding.
Rather than applying noise reduction to the entire clip, you might consider just hitting the darker regions of the image where noise is most bothersome:
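A sketch of the idea in Python, with a crude box blur standing in for a real spatial denoiser (the threshold and softness values are arbitrary illustration numbers; in Resolve you’d do this with a luma qualifier feeding the NR tools):

```python
import numpy as np

def box_blur(img):
    """Crude 3x3 box blur standing in for a real spatial denoiser."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(padded[y:y + h, x:x + w] for y in range(3) for x in range(3)) / 9

def denoise_shadows(img, threshold=0.25, softness=0.15):
    """Blend the denoised image in only below a luma threshold, with a soft falloff."""
    denoised = box_blur(img)
    weight = np.clip((threshold + softness - img) / softness, 0, 1)  # 1 in deep shadows
    return img * (1 - weight) + denoised * weight

rng = np.random.default_rng(0)
shadows = 0.1 + rng.normal(0, 0.02, (16, 16))     # noisy shadow region
highlights = 0.8 + rng.normal(0, 0.02, (16, 16))  # noisy highlight region
print(denoise_shadows(shadows).std() < shadows.std())        # True: shadows smoothed
print(np.allclose(denoise_shadows(highlights), highlights))  # True: highlights untouched
```

The soft falloff matters: a hard luma cutoff would leave a visible seam where denoised shadows meet untouched midtones.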
Grain adds texture to an otherwise squeaky clean, sterile digital image and, as HDR is rather unforgiving, is all but indispensable for hiding imperfections in complexions, makeup, graphics, visual effects and prosthetics. Adding a moderate amount of grain can help hide banding when uploading to video sharing platforms. Another seldom discussed aspect of grain is that it’s in constant motion, breathing life into each and every frame. At the same time, the aggressive compression algorithms of video sharing platforms like YouTube destroy high frequency detail, turning the voluptuous grain seen in the grading suite into unsightly macroblocking, so you’ll have to decide for yourself whether it might be preferable to not add grain to projects at all. If you are using a LUT, like Cullen Kelly’s Kodak 2383 PFE LUT for example, be sure to add your grain before, not after, the LUT.
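Why the order matters can be shown with a toy example. Here a simple s-shaped smoothstep stands in for a print-emulation LUT’s contrast curve (a real 2383 PFE is a 3D LUT, not this): grain added before the curve gets compressed by the highlight roll-off, just as grain in film negative does, while grain added after rides on top at full strength.

```python
import numpy as np

def film_print_curve(x):
    """Stand-in for a print LUT's contrast curve: a smoothstep with
    highlight roll-off (illustration only, not an actual 2383 PFE)."""
    return x * x * (3 - 2 * x)

rng = np.random.default_rng(1)
grain = rng.normal(0, 0.02, 10000)
highlight = 0.9   # a bright patch where the roll-off is strongest

# Grain BEFORE the LUT: the roll-off compresses the grain excursions.
before = film_print_curve(np.clip(highlight + grain, 0, 1)).std()
# Grain AFTER the LUT: full-strength grain sits on the rolled-off image.
after = (film_print_curve(np.full(10000, highlight)) + grain).std()
print(before < after)  # True: pre-LUT grain is tamed in the highlights
```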
Download a comparison between Dehancer 5.3 film grain and DaVinci Resolve grain here. It’s well-nigh impossible to see how the plug-in compares to Resolve on something like the MacBook Pro Liquid Retina XDR mini-LED, which is why we recommend throwing the clip on the timeline of your favorite NLE, setting it up for HDR and viewing on an external UHD monitor or television set.
During an appearance on Cullen Kelly’s Grade School, the brilliant colorist Jill Bogdanowicz revealed a secret to accentuating texture without it looking over-processed. While working on Joker, the colorist used Live Grain – which separates out the red, green and blue channels, creating grain that resembles scanned film – to accentuate texture in the cooler, darker backgrounds while de-emphasizing grain in the warmer, red tones of the talent’s skin. One way to accomplish this in DaVinci Resolve is to create a layer mixer beneath the grain node, open up the HSL Qualifier, switch off luma and saturation, and, using the highlighter tool to see the effect in the viewer, adjust hue to isolate the skin tones. Afterward, apply clean white, clean black and blur radius to tidy things up. Since we don’t want the skin to be completely free of grain, we add a keyer to the layer node to restore some texture to the talent’s skin. Click here to download an example (HDR) of this powerful technique.
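For those who like to see the moving parts, the effect can be approximated in code: full-strength grain everywhere, scaled down inside a skin-tone hue range, with a factor standing in for the keyer that restores some texture. The hue range and scale factor are made-up illustration values, and this is a simplification of the qualifier/layer-mixer workflow, not a reproduction of it:

```python
import numpy as np

def rgb_to_hue(rgb):
    """Per-pixel hue in degrees (0-360), standing in for the HSL qualifier's hue."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(-1), rgb.min(-1)
    d = np.where(mx == mn, 1e-9, mx - mn)
    h = np.select([mx == r, mx == g],
                  [((g - b) / d) % 6, (b - r) / d + 2],
                  default=(r - g) / d + 4)
    return (h * 60) % 360

def grain_with_skin_holdout(rgb, grain, skin_hues=(10, 40), skin_grain=0.3):
    """Apply grain at full strength, scaled down where the hue falls in a
    (hypothetical) skin-tone range; skin_grain plays the role of the keyer
    that restores some texture so the face isn't unnaturally clean."""
    hue = rgb_to_hue(rgb)
    is_skin = (hue >= skin_hues[0]) & (hue <= skin_hues[1])
    weight = np.where(is_skin, skin_grain, 1.0)[..., None]
    return np.clip(rgb + grain * weight, 0, 1)

frame = np.array([[[0.8, 0.5, 0.4],    # warm skin tone
                   [0.2, 0.3, 0.6]]])  # cool background
grained = grain_with_skin_holdout(frame, grain=0.05)
print(grained)  # background moves by 0.05, skin by only 0.015
```

The cooler background takes the full grain while the warm skin tones take a fraction of it, which is the Live Grain-style emphasis Bogdanowicz describes.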
An even easier method to de-emphasize grain on the subject is to use the new Depth Map in DaVinci Resolve Studio 18. Once you’ve isolated the subject, use the softness slider to add a bit of grain back into the talent’s skin. To see the improvement, click here to download sample footage.
Readers might also be interested in investigating some of the print film emulation plugins that are becoming more widely available.
How to modify LUTs
Cullen Kelly’s Kodak 2383 PFE LUT works its magic no matter what footage you’re working with; however, the look is pretty bold, particularly the contrast, so you might want to scale it back a smidgen. Fortunately for us, Cullen released a video demonstrating how to finesse color and contrast independently of each other so you can achieve precisely the look you’re after.
To address an often asked question about the appearance of scopes in HDR10: one significant difference is that in SDR the signal can ordinarily fill out the scopes, say from 0–1023, while in HDR PQ the bulk of the signal will usually be bunched up toward the bottom end of the waveform, from 0–200 nits or so, with only small excursions for specular highlights. So what ends up happening is that even if we set, say, 1,000 nits as our peak brightness, we might actually only see occasional peaks at 400 or 600 nits and nothing greater than that, depending of course on the project and the subject matter.
The reason for this is that the average picture level (APL) of SDR and HDR should be similar (in fact, HDR not infrequently actually ends up being lower), and generally speaking, everything above 203 nits (diffuse white) is for specular highlights. So while our signal may very well appear identical when switching between 10-bit, 12-bit and HDR PQ in the waveform settings on the Color Page, if instead we were to switch the project settings themselves from HDR to SDR on the Color Management Page, we’d notice that our waveforms look quite different indeed – stretching out to fill the 10-bit scope while shrinking back down to below 100-200 nits in the HDR PQ one.
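That bunching is baked into the PQ curve itself, which you can verify with a few lines of Python. The constants are the published SMPTE ST 2084 values; the function name is ours:

```python
import math

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """Absolute luminance (cd/m^2) to normalized PQ signal level (0-1)."""
    y = (nits / 10000) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

for nits in (100, 203, 1000, 10000):
    print(f"{nits:>5} nits -> {pq_encode(nits):.1%} of the PQ signal")
```

Everything up to 100 nits already occupies roughly half the signal range, 203-nit diffuse white sits at about 58%, and even a 1,000-nit peak only reaches about 75%, which is why a correctly graded PQ waveform looks bunched toward the bottom.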
The Show Reference Levels checkbox lets you enable adjustable Low and High reference level markers by setting the Low and High sliders to something other than their defaults. These reference markers are especially useful for HDR grading where you’re working within a specific peak luminance threshold, such as when targeting 203 nits for diffuse white.
HDR Reference White
While typical white levels presently used in PQ production range anywhere from 145–250 nits, it is recommended that HDR Reference White (diffuse white) be 203 nits, or 58% of the full PQ signal (input) level. Leaving little headroom (i.e. choosing a much higher value) means brighter diffuse whites at the expense of flatter-looking specular highlights, whereas leaving more headroom allows for better-looking highlights. It should be noted that 203 nits is only the recommendation for 1,000-nit peak brightness displays: that figure gets progressively higher for displays brighter than 1,000 nits. If, however, you’ve got graphics at 203 nits over a dark image, they may overpower the scene, whereas if the scene is very bright, the graphics may be difficult to see; larger areas of diffuse white may also appear brighter than a small area – which is why 203 nits is only a recommendation and not carved in stone!