Monster Guide: HDR10 in Resolve Studio 18 (Part V)

Part I: Project settings

Part II: BT.2020 or P3-D65 Limited?

Part III: HDR Palette, project render settings, generate MaxCLL and MaxFall HDR10 metadata

Part IV: HDR to SDR Conversion LUT for YouTube

Part V: RAW

Part VI: The Grade

Part VII: The State of HDR Film Emulation LUTs & Plugins

Part VIII: Why HDR Production Monitors Matter

Avoid Y’CbCr
For HDR projects, steer clear of recording XAVC S-I 4:2:2 internal if at all possible. Art Adams explains:

“The Y’CbCr encoding model is popular because it conceals subsampling artifacts vastly better than does RGB encoding. Sadly, while Y’CbCr works well in Rec 709, it doesn’t work very well for HDR. Because the Y’CbCr values are created from RGB values that have been gamma corrected, the luma and chroma values are not perfectly separate: subsampling causes minor shifts in both. This isn’t noticeable in Rec 709’s smaller color gamut, but it matters quite a lot in a large color gamut. Every process for scaling a wide color gamut image to fit into a smaller color gamut utilizes desaturation, and it’s not possible to desaturate Y’CbCr footage to that extent without seeing unwanted hue shifts. My recommendation: always use RGB 4:4:4 codecs or capture raw when shooting for HDR, and avoid Y’CbCr 4:2:2 codecs. If a codec doesn’t specify that it is “4:4:4” then it uses Y’CbCr encoding, and should be avoided.”
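The problem Adams describes can be shown numerically. The sketch below is illustrative only — it assumes BT.709 luma coefficients and a simple per-channel 2.4 gamma, not any particular camera pipeline — but it demonstrates the core issue: because Y' is built from gamma-corrected RGB, desaturating in Y'CbCr also shifts the true luminance.

```python
# Sketch: why Y'CbCr built from gamma-corrected RGB mixes "luma" and chroma.
# Assumes BT.709 luma coefficients and per-channel gamma 2.4 (illustrative).

def ycbcr_from_gamma_rgb(r, g, b):
    """BT.709 luma/chroma from *gamma-corrected* R'G'B' values (0-1)."""
    y  = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return y, cb, cr

def desaturate(y, cb, cr, amount):
    """Naive gamut-mapping desaturation, applied directly in Y'CbCr."""
    return y, cb * amount, cr * amount

def gamma_rgb_from_ycbcr(y, cb, cr):
    r = y + 1.5748 * cr
    b = y + 1.8556 * cb
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
    return r, g, b

# A saturated red, encoded with gamma 2.4 applied per channel.
gamma = 1 / 2.4
rgb = (0.9 ** gamma, 0.1 ** gamma, 0.1 ** gamma)

y, cb, cr = ycbcr_from_gamma_rgb(*rgb)
r2, g2, b2 = gamma_rgb_from_ycbcr(*desaturate(y, cb, cr, 0.5))

# Linearize and compare true luminance before and after: it shifts, because
# Y' (luma) is not luminance - the chroma desaturation leaks into brightness.
lin = lambda c: max(c, 0.0) ** 2.4
lum = lambda r, g, b: 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)
print(f"luminance before: {lum(*rgb):.4f}, after: {lum(r2, g2, b2):.4f}")
```

Halving the chroma here darkens the patch by over 20% in linear light, even though Y' was held constant. In Rec 709's small gamut the desaturation amounts involved are tiny and this stays invisible; in a wide gamut they are not.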

This is the consensus among industry professionals, including post-production houses, who prefer to receive RAW for HDR projects. It is also why streaming platforms like Netflix use the ICtCp color space, which keeps luminance and chroma properly separate. For those wanting to future-proof their material, or who might need to create a Dolby Vision master in the future:

“Dolby Laboratories recommends using certain types of source material for creating high quality Dolby Vision content. In general, the best source material for Dolby Vision best retains the native color gamut and original dynamic range of the content at the time of its origination.

Original camera footage

For mastering first run movies in Dolby Vision, the original camera raw files or original film scans are ideal.

  • Film: Original Camera Negative (OCN)
  • Digital: Camera RAW”

Shoot 12-bit or higher
Samuel Bilodeau provides further grounds for preferring 12-bit RAW to 10-bit Log:

“… if you’re considering which codecs to use as intermediates for HDR work, especially if you’re planning on an SDR down-grade from these intermediates, 12 bits per channel as a minimum is important. I don’t want to get sidetracked into the math behind it, but just a straight cross conversion from PQ HDR into SDR loses about ½ bit of precision in data scaling, and another ¼ – ½ bit precision in redistributing the values to the gamma 2.4 curve, leaving a little more than 1 bit of precision available for readjusting the contrast curve (these are not uniform values). So, to end up with an error-free 10 bit master (say, for UHD broadcast) you need to encode 12 bits of precision into your HDR intermediate.”
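A quick back-of-the-envelope check of the scale of the problem, using the ST 2084 inverse EOTF (constants from the PQ specification) and assuming SDR reference white at 100 nits:

```python
# How many 12-bit PQ code values land in the SDR range (0-100 nits)?
# ST 2084 inverse EOTF constants, per the PQ specification.
m1, m2 = 0.1593017578125, 78.84375
c1, c2, c3 = 0.8359375, 18.8515625, 18.6875

def pq_encode(nits):
    """Luminance in nits -> PQ signal (0-1)."""
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

sdr_fraction = pq_encode(100)           # ~0.51: about half the signal range
codes_12bit = int(sdr_fraction * 4096)  # 12-bit code values below 100 nits
codes_10bit = int(sdr_fraction * 1024)  # 10-bit code values below 100 nits

print(sdr_fraction, codes_12bit, codes_10bit)
```

A 12-bit PQ encode keeps over 2,000 code values in the SDR range — comfortably more than the 1,024 steps of a 10-bit gamma 2.4 master — whereas a 10-bit PQ encode offers only around 500, which is where the banding risk comes from.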

“Converting one 10-bit format into another 10-bit format will reduce the performance of the signal and should be avoided whenever possible.” – HDR: A Guide to High Dynamic Range Operation for Live Broadcast Applications, Grass Valley

“Linear raw excels at capturing highlight information: compared to log, you will find more texture in the highlights and in the brighter parts of your captured scenes. This will only become more important as HDR screens get better at showing highlights correctly. Current standard dynamic range displays don’t show highlights well, so the extra highlight data in raw is often of little benefit over log. But that’s going to change in the next few years, and linear recording, with its extra highlight information, will become more and more important.” – Generation Media

RAW is future-proof
Yet another seldom-discussed advantage of RAW: when shooting log, the quality of the file is entirely at the mercy of the camera’s internal encoder, so it’s not possible to improve upon it, or to change the gamma or color space, at some point in the future. With RAW, the image quality is determined by the debayering process in post, giving you the ability to re-encode the footage with a higher-quality encoder when one becomes available.

In our own experience, and from watching countless videos uploaded by others, RAW gives significantly better results, and there’s no way we’d go back to shooting log for HDR. Not to mention that a7S III 4.2K ProRes RAW files contain nearly 25% more resolution, with no baked-in denoising, lens corrections, in-camera sharpening or other enhancements applied.


Noise

HDR tends to amplify existing noise in the image, particularly in shadow areas. De-noising in post, with either Resolve’s own built-in noise reduction or Neat Video, will give the best results.

And here’s the same ProRes 4444 clip with a small amount of spatial noise reduction in addition to the temporal: Mode: Faster, Radius: Small, Luma: 2, Chroma: 2.

There’s also more color information to work with in the RAW clip than in the XAVC S-I internal clip:

From best to worst

“If you’re using a prosumer or entry level professional camera, taking a few preparatory steps to set up how you’ll actually be capturing the image can mean the difference between getting footage that can be used in HDR mastering, vs. footage that can’t.

  1. 16 bpc > 14 bpc > 12 bpc > 10 bpc > 8 bpc: 10 bpc should be your minimum spec.
  2. RAW > LOG > Linear > Gamma 2.4: avoid baking in your gamma at all costs!
  3. Camera Native / BT.2020 > DCI-P3 > Rec. 709 color primaries
  4. RAW > Compressed RAW > Intraframe compression (ProRes, DNxHR, AVC/HEVC Intra) > Interframe compressed (AVC/H.264, HEVC).” – Samuel Bilodeau, Mystery Box

Flexibility in post
Mark Spencer demonstrates how, with the same extreme grade applied, the Sony internal XAVC file falls apart much more easily than the ProRes RAW one:


Lift, gamma, gain
It should be mentioned here that traditional lift/gamma/gain wheels are unsuitable for grading HDR, just one reason why DaVinci Resolve, with its HDR Color Palette, is preferable to Final Cut Pro.

“It is also worth bearing in mind that the traditional lift, gamma, and gain operators are particularly unsuitable for acting on ST 2084 image data, where 0–1 represents zero to 10,000 cd/m². Highlights, in particular, can swing uncontrollably into extreme values as a result of small changes. Modern grading systems offer alternate grading operators tailored to the needs of HDR grading.” – Nick Shaw
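To see why, here is a small sketch using the ST 2084 EOTF (constants from the PQ specification). The choice of a 0.75 code value and a 5% gain is illustrative, but the behaviour is general: because PQ signal maps 0–1 onto 0–10,000 nits, a "small" gain tweak near the top of the curve produces a huge swing in absolute luminance.

```python
# Why a small gain tweak is dangerous on ST 2084 (PQ) data.
m1, m2 = 0.1593017578125, 78.84375
c1, c2, c3 = 0.8359375, 18.8515625, 18.6875

def pq_decode(e):
    """ST 2084 EOTF: PQ signal (0-1) -> luminance in nits."""
    p = e ** (1 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

signal = 0.75                  # a bright highlight, just under 1,000 nits
for gain in (1.00, 1.05):      # a "small" 5% gain, as a gain wheel might apply
    nits = pq_decode(min(signal * gain, 1.0))
    print(f"gain {gain:.2f}: {nits:.0f} nits")
```

The 5% signal gain pushes the highlight from roughly 980 nits to nearly 1,400 nits — about a 40% jump in light output from a nudge of the wheel, which is exactly the uncontrollable swing Shaw describes.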

S-Gamut3 or S-Gamut3Cine?

For Color Gamut for RAW Output, select either S-Gamut3/S-Log3 or S-Gamut3.Cine/S-Log3. For what it’s worth, S-Gamut3.Cine/S-Log3 is the most common color space used on Sony FX9, F55, FS7, F5 and FX6 Netflix productions (which also happen to use the P3-D65 color space in post-production). We’ve also been informed that key mastering engineers at Sony Pictures in Culver City recommend selecting the S-Gamut3.Cine/S-Log3 settings in Resolve.

S-Log3 is recorded as Full range; there is no option to select Legal range. RAW is sensor-native data and does not apply any color space or log curve – only the color space metadata is saved. RAW is recorded as 16-bit linear, which is then compressed and recorded in the Ninja V as 12-bit ProRes RAW. The color space is applied to the RAW data during the de-Bayer process.

Exposure is critical. Avoid blown-out highlights and overexposed skin at all costs.

Option #1: Transcode to cDNG with RAW Converter


To work with ProRes RAW footage in DaVinci Resolve, you’ll first need to convert it to cDNG. Download RAW Converter from the App Store.

Project settings:

After dropping the folders onto the timeline of the Edit page, you’ll be able to make adjustments to the clips in the Camera RAW tab on the color page of DaVinci Resolve. If you’re grading with an OLED TV, we recommend reducing exposure beforehand to avoid frying your display!

Here’s a comparison between lossless, 3:1 and 5:1 cDNG:

Option #2: Transcode to ProRes 4444 XQ with a LUT using Apple Compressor (display referred)

Import your ProRes RAW files into Apple Compressor and select ProRes 4444 XQ.


Compressor will automatically choose S-Log3/S-Gamut3. If you would prefer to work with S-Log3/S-Gamut3.Cine, in the video tab, select S-Log3/S-Gamut3.Cine for the RAW to Log conversion. Compressor will then automatically choose S-Log3/S-Gamut3.Cine for the Camera LUT.

Here are the project settings for ProRes 4444 XQ in DaVinci Resolve 18. Set ‘HDR mastering is for’ to 800 nits if grading with an LG OLED TV, or to 1,000 nits if using a MacBook Pro (2021) or an external monitor capable of 1,000 nits.

And here are the advanced RCM settings for working with ProRes 4444 XQ with a LUT.

Note: Only limit output gamut to P3-D65 if you’re working with a display calibrated to P3-D65.

Option #3: Transcode to ProRes 4444 XQ without a LUT using Assimilate Player Pro

To transcode from ProRes RAW to ProRes 4444 XQ without a LUT, download and install Assimilate Player Pro.

Open Assimilate Player Pro and import the folder where the clips are located.

Hit the Render button, choose ‘Same as Source’ for the output, Apple ProRes 4444 XQ, uncheck ‘Use Display LUT’, enable ‘Use Source Name’, then press ‘Yes’.

The DaVinci Resolve Studio project settings can be found in part one of the guide.


RED Komodo Workflow

RED: Exposure Strategy

The histogram

Our goal should be to avoid clipping in the highlights while maintaining acceptable noise levels. The principal tool for judging exposure levels is the histogram, which shows the precise luminance levels of the red, green, and blue pixels after the ISO and white balance have been set. Monitors, while indispensable, are less than ideal for gauging brightness. The histogram allows us to see how brightness is distributed throughout the image and how near the shadows and highlights are to clipping. However, the histogram alone cannot indicate correct exposure: it can’t warn us of areas in the image that are about to clip, or already have, which is where the false color tools and traffic lights come in.
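The kind of reading we take from the histogram can be sketched in a few lines. This is purely illustrative — a toy frame and NumPy, not the camera's actual histogram pipeline — but it shows the two questions the histogram answers: where is the brightness concentrated, and how close are the tails to the edges?

```python
import numpy as np

# Minimal histogram reading: peak location plus shadow/highlight headroom.
# Toy data and thresholds - illustrative, not a camera implementation.

def histogram_summary(channel, bins=64):
    """channel: float array of normalized values (0-1)."""
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    p1, p99 = np.percentile(channel, [1, 99])
    peak = hist.argmax()
    return {
        "peak_bin_center": float((edges[peak] + edges[peak + 1]) / 2),
        "shadow_headroom": float(p1),          # 1st percentile's distance from black
        "highlight_headroom": float(1 - p99),  # 99th percentile's distance from clip
    }

rng = np.random.default_rng(0)
scene = rng.normal(0.45, 0.15, size=(270, 480)).clip(0, 1)  # mid-keyed toy scene
print(histogram_summary(scene))
```

Note what the summary cannot tell you: a pile-up of values at exactly 0.0 or 1.0 looks like just another bin, which is precisely why the false color tools and traffic lights exist.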

False Color Modes

The RED Komodo has two false color modes: false color exposure and false color video. The false color exposure mode alerts us to clipping in the highlights (indicated in red) and clipping in the shadows (purple) in addition to indicating 18% gray (green). There is often a great deal of exposure latitude here before an excessive amount of purple or red occurs.

Image courtesy of RED.com

False color video mode furnishes us with more granular information about brightness levels in different areas of the image, but should be used at ISO 800 and above for the most accurate results. Its colors are based on IRE values, not the RAW data, and it is useful for evaluating lighting and for adjusting a suggested look prior to sending footage off to post-production. There are nine levels: green represents 18% gray, pink corresponds to typical Caucasian skin tones, teal signifies textured shadows, straw, yellow and orange come ever closer to white, and the remaining colors indicate the tonal extremes.

Traffic Lights

The red, green and blue dots, commonly referred to as the traffic lights, inform us when a particular color channel is clipping: when around 2% of the pixels in an individual color channel have clipped, the corresponding traffic light is triggered. Sitting between the highlight and shadow traffic lights are the goal posts. The fullest extent of each goal post represents a quarter of all the image pixels for a particular color channel. Ordinarily, the shadow goal posts can rise by as much as 50% and still yield acceptable noise levels, but even a minuscule amount in the highlight goal posts can spell disaster, especially in HDR.
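As a toy model, the traffic-light logic described above amounts to a per-channel clipped-pixel count against a ~2% threshold. The threshold and test frames below are illustrative, not RED's actual implementation:

```python
import numpy as np

# Toy traffic lights: a channel's light comes on once ~2% of its pixels clip.
CLIP_FRACTION = 0.02

def traffic_lights(frame, clip_level=1.0):
    """frame: float array (H, W, 3); returns {channel: light on?}."""
    return {
        ch: bool((frame[..., i] >= clip_level).mean() >= CLIP_FRACTION)
        for i, ch in enumerate("RGB")
    }

rng = np.random.default_rng(1)
hot = rng.uniform(0.0, 1.04, size=(270, 480, 3)).clip(0, 1)  # ~4% of pixels clipped
safe = hot * 0.95                                            # backed off ~0.07 stop
print(traffic_lights(hot), traffic_lights(safe))
```

Pulling exposure back by a fraction of a stop is enough to extinguish all three lights in this toy example — which is exactly the behaviour the ETTR procedure below exploits.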

Glint of sunlight on the phone: a situation where clipping of highlights is unavoidable.

RED Exposure Strategy in Practice

ISO

Because so much of HDR’s dynamic range is located in the shadows, where noise tends to be most prevalent, and because noise is far more distracting in HDR than in SDR, our first suggestion is to cut the manufacturer’s overly optimistic base ISO rating in half, bumping it back up to 800 or higher only for bright scenes with lots of important highlight detail.

Exposure

Our recommendation is to use the histogram and traffic lights and expose to the right: open up the iris until the lights turn on, then gradually reduce exposure until they disappear, while making sure the histogram isn’t all bunched up to the right. The advantages are being able to arrive quickly at the optimal exposure with no guesswork, shallower depth of field and less noise in the image. One downside of ETTR is that we’re no longer dealing with WYSIWYG: it can be difficult to evaluate the image when exposing to the right.
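The ETTR procedure above can be written as a simple loop: open up in small increments until a traffic light would trip, then settle on the brightest clean setting. The "camera" here is a stand-in — exposure just scales a toy linear frame — and the 2% clip threshold and 1/6-stop step are illustrative assumptions.

```python
import numpy as np

def lights_on(frame):
    """True if >= 2% of any channel's pixels clip (toy traffic lights)."""
    return any((frame[..., i] >= 1.0).mean() >= 0.02 for i in range(3))

def expose_to_the_right(frame, step=1/6, max_stops=12.0):
    """Open up in 1/6-stop steps until one more step would trip the lights."""
    stops = 0.0
    while stops < max_stops and not lights_on(frame * 2 ** (stops + step)):
        stops += step
    return stops  # brightest setting with all lights still off

rng = np.random.default_rng(2)
scene = rng.uniform(0.0, 0.3, size=(135, 240, 3))  # underexposed linear toy scene
print(f"open up by {expose_to_the_right(scene):.2f} stops")
```

For this toy scene the loop lands about 1.7 stops up: one more sixth of a stop and more than 2% of pixels would clip, which is the "lights just went out" point the text describes.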

“Shooting RAW/Log is not about producing a final image, it’s about producing a “digital negative” that has great potential further down the line.” – Blain Brown

Photo: Mark Toia

In addition to adjusting the aperture to control the amount of light striking the sensor, RED has another trick up its sleeve: ISO can be used to precisely determine the number of stops of dynamic range/latitude above and below middle gray. If you’re shooting outdoors in bright light, ISO 800 will probably give you the best balance between highlight and shadow latitude; if you’re indoors or working in low contrast lighting, a lower ISO, which allocates more dynamic range to the shadows, reducing the appearance of noise, is preferable. At the same time, if you’re working on a film, you don’t want to have wild swings in ISO, as that could introduce jarring changes in texture.
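The stop-redistribution behaviour can be modelled in a few lines. The 16.5-stop total and the split at ISO 800 below are illustrative assumptions, not RED's published figures; the point is the mechanism — each doubling of ISO shifts one stop of latitude from below middle gray to above it.

```python
import math

# Illustrative model of metadata-driven ISO on a RAW camera: total latitude
# is fixed by the sensor; ISO only moves the middle-gray split point.
TOTAL_STOPS = 16.5        # assumed total latitude (illustrative)
STOPS_ABOVE_AT_800 = 7.0  # assumed highlight latitude at ISO 800 (illustrative)

def latitude_split(iso):
    """Return (stops above middle gray, stops below) for a given ISO."""
    above = STOPS_ABOVE_AT_800 + math.log2(iso / 800)
    return above, TOTAL_STOPS - above

for iso in (400, 800, 1600):
    above, below = latitude_split(iso)
    print(f"ISO {iso}: {above:.1f} stops above / {below:.1f} below middle gray")
```

Halving ISO to 400 trades a stop of highlight protection for a stop of shadow latitude — which is why lower ISOs look cleaner but make blown highlights more likely, exactly the trade-off described in the text.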

Another thing to keep in mind is that, while REDCODE RAW has tremendous latitude, at lower ISOs, you’ve got less highlight protection, and correct exposure becomes more critical. Nevertheless, the ability to redistribute latitude/dynamic range above and below middle gray offers creative control that we would never want to relinquish, and it baffles us that many stubbornly keep their ISO permanently fixed at 800. It goes without saying that this is just how we work; if you prefer to use a light meter, waveform monitor, zebras or just eyeball exposure using a calibrated monitor, those are all valid ways of working, too. Lastly, don’t let anyone tell you they’ve done all the tests themselves so you don’t have to: everyone should be performing their own tests to determine what works best for them.

High or Low Quality?

Regarding the HQ, MQ and LQ settings on the RED Komodo, Jarred Land had this to say:

The only reason you should select low quality is […] to record for a very long time [or] because you have a limited amount of cards […]. The medium shouldn’t even really be there, but you know, people would probably revolt if we [didn’t include it] … You should just be shooting on high quality all the time…

Shooting R3D HQ to a 512GB CFast card means having a mere 30 minutes of record time, so we’ll have to offload footage more frequently should we decide to go that route. Most bitrate comparisons deal with subject matter like fast-moving water and tree branches, but we were more interested in seeing how bitrate affected the subtle tonal transitions in the blurry backgrounds of our model shoots. From the couple of tests we’ve done so far, shooting R3D HQ, together with halving the recommended ISO to 400 while making sure shadows aren’t buried in the noise floor, really does result in cleaner-looking, more detailed footage.

For our very first back-to-back comparisons between HQ and LQ, we chose the Meike 75mm T2.1 s35 cinema lens for its nice, creamy rendering of out-of-focus areas. In DaVinci Resolve, when pausing on any individual frame, the LQ clips took on an ever-so-slightly coarser appearance, and once we began playing the clips, the noise seemed to be ‘chewing away’ at some of the tonal transitions. This was only visible when pixel peeping on our LG 55CX, not on our 16″ MacBook Pro. Sensor windowing, such as when shooting anamorphic, also impacts data rates, and we hope to do more testing once we’ve picked up an anamorphic lens.
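For anyone planning offloads, the card math behind that figure is straightforward (approximate, taking the 512 GB / 30 minute numbers above at face value):

```python
# Approximate data-rate and offload math for R3D HQ on a 512GB CFast card.
card_gb = 512
minutes = 30

mb_per_s = card_gb * 1000 / (minutes * 60)  # sustained write rate, ~284 MB/s
gbit_per_s = mb_per_s * 8 / 1000            # ~2.3 Gbit/s
cards_per_hour = 60 / minutes               # 2 card swaps per hour of footage

print(f"{mb_per_s:.0f} MB/s ({gbit_per_s:.1f} Gbit/s), {cards_per_hour:.0f} cards/hour")
```

Roughly 280 MB/s sustained and two card swaps per hour of continuous recording — worth budgeting for in both media and offload time before committing to HQ on a long shoot.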

R3D HQ vs R3D LQ

RED Komodo Project Settings

Go to Preferences > Decode Options and change Use GPU for R3D from Debayer to Decompression and Debayer. You will be prompted to restart your computer.
Camera RAW settings.
Camera RAW controls on the Color Page.

Timeline and delivery resolution

For those shooting 6K 17:9 and delivering for UHD displays, here are the timeline and render resolution settings.

DaVinci YRGB Color Management Settings

Only limit output gamut to P3-D65 if you are working with a display calibrated to P3. Set ‘HDR mastering is for’ according to the capabilities of your mastering display.

Node based workflow with RED output transform LUT

Color Management settings.
Download and install REDCINE-X PRO.
Go to File > IPP2 LUT Creator.
Entering the values as shown will result in a flat, log-like image with a great deal of flexibility in post.
Open LUT folder on Color Management page of Resolve.
Place the LUT inside the RED folder.
Set 3D Lookup Table Interpolation to Tetrahedral and update lists.
Apply the LUT to the last node in your color pipeline.

An easier method is to place the node with the RED Output Transform LUT at the timeline level, which will affect all clips in the project.

Click on Clip and select Timeline.
Hit Option + S, then apply the RED Output Transform LUT to the node on the timeline level.

In order to limit gamut to P3-D65, create a new node directly after the RED Output Transform LUT by dragging and dropping the Gamut Limiter from Resolve FX Color. For Current Gamut, select Rec2020, for Current Gamma select ST2084 1,000 nits, then Limit Gamut to P3-D65.

Node based workflow with Resolve OFX Color Space Transform

Color Management page settings. Set ‘HDR mastering is for’ according to the capabilities of the mastering display.
Settings, Color Page. To restrict output color space to P3-D65, add the gamut limiter to the right of the CST.

Re: Timeline Color Space

When it comes to post-processing, using the fewest transforms and working at the highest available bit depth, in a native color space larger than the display color space, are key to preserving image quality. So, should you change your timeline color space to DaVinci Wide Gamut Intermediate or not?

To begin with, presumably you’ve chosen a camera for its particular look, and changing the working color space in post will inevitably deviate from that look, sometimes subtly, sometimes not so subtly, and often in ways that only become apparent once your favorite creative LUT, developed specifically for that camera, is applied. What selecting DaVinci Wide Gamut Intermediate as the working color space will not do is magically improve image quality, reveal colors that weren’t captured at the outset, or make the picture any more cinematic. That being the case, why would anyone choose, say, DaVinci Wide Gamut rather than REDWideGamutRGB/Log3G10 for their timeline color space?

If you’re a professional colorist handling footage from a number of different cameras on the same timeline with deliverables targeting everything from social media and streaming networks to broadcast and theatrical release, mapping everything into a unified color space makes perfect sense. It elegantly solves the problem of mismatched color spaces, simplifies VFX round-tripping, enables using the same LUTs across all shots and ensures that NLE controls behave consistently across all of the clips on the timeline. If we were working on a multi-camera shoot however, we much prefer Walter Volpatto’s approach:

“Unless you’re doing a documentary or a show that has way too many cameras you have a hero camera: can be an Alexa, can be a Sony, can be a Red, a Blackmagic, a Canon, you name it. You have a hero camera, and usually you have a hero logarithmic color space. So the idea is that that is the color space that I use for all the color corrections. If a shot belonged to a different camera then I do a color space transform to bring that specific shot to the logarithmic”.

Walter Volpatto

So, if you’re dealing with a single-camera project, simply choosing the camera’s own native color space for the timeline and adding an output transform to the display color space as the very last operation is entirely adequate, avoiding unnecessary complexity and image manipulations that could even degrade the image. In the Custom mode of RCM, even experienced colorists are at a loss as to how each and every one of the settings impacts image quality! Do note that if you’re using a plugin like Dehancer or if you want to use any of Cullen Kelly’s LUTs, you’ve got no choice but to select DaVinci Wide Gamut Intermediate as the working color space.

Save trimmed R3D files

Go to File > Media Management and press Start.

Mission completed! The .RDC folder containing the R3Ds and .rtn as well as the .drt have been saved.

19 thoughts on “Monster Guide: HDR10 in Resolve Studio 18 (Part V)”


  1. Wondering why the RAW transcode defaults to Rec 2020PQ. Have you tried any other settings like not changing it from SLOG3/SGamut i.e. what came from the camera? Let Resolve handle it?

  2. In your comparison of XAVC S-I vs RAW, you used an a7S III. When shooting with an FX6, a lot of those problems, like not being able to turn off noise reduction, don’t exist. And many tests online seem to show that shooting internal XAVC-I is better than shooting RAW, at least in the case of the FX6.

    https://www.cined.com/sony-fx6-lab-test-external-prores-raw-vs-internal-xavc-intra/

    Knowing this, would the FX6 be okay with shooting internally rather than raw in an HDR workflow?

      1. Thank you for the quick reply!
        I’d love to do some side-by-side test of the 2 with my camera.

        What would be a good way about setting up a scene to shoot to test this?

      2. I’m sorry, but I’m no authority when it comes to setting up test scenes. You can always examine your clips side-by-side on the timeline of any NLE that supports PRR if you like. Nevertheless, no matter which camera is used, Y’CbCr is inferior to RGB 4:4:4. Streaming networks also avoid Y’CbCr because it is not suitable for HDR, choosing ICtCp instead. Yet another reason is that, when shooting log, the quality of the file is completely at the mercy of the camera’s internal encoder, meaning that it’s not possible to improve upon this or change the gamma/color space at some point in the future; whereas with raw, the image quality is determined by the debayering process in post, giving you the ability to re-encode the footage with a higher quality encoder when one becomes available. Furthermore, should you ever decide to upgrade to Dolby Vision, you will definitely want 12-bit files, not 10-bit. Lastly, you never want to convert one 10-bit format into another 10-bit format.

      3. Thank you for the insight!

        After a lot of headaches trying to get ProRes RAW files into a format that is compatible with Resolve on a Windows system, I can now confirm that, side by side, the RAW files are indeed much better for HDR grading and have way more information in the highlights.

        As much as I love the convenience of the internal XAVC-I codec on the FX6 and how well it grades for SDR content, I’ll now be filming in RAW for all footage that will be delivered in HDR.
