Monster Guide: HDR10 in Resolve Studio 19 (Part V)

Part I: Project settings

Part II: BT.2020 or P3-D65 Limited?

Part III: HDR Palette, project render settings, generate MaxCLL and MaxFall HDR10 metadata

Part IV: HDR to SDR Conversion LUT for YouTube

Part V: RAW

Part VI: The Grade

Part VII: The State of HDR Film Emulation LUTs & Plugins

Part VIII: Why HDR Production Monitors Matter

Avoid Y’CbCr
For HDR projects, steer clear of recording XAVC S-I 4:2:2 internally if at all possible. Art Adams explains:

“The Y’CbCr encoding model is popular because it conceals subsampling artifacts vastly better than does RGB encoding. Sadly, while Y’CbCr works well in Rec 709, it doesn’t work very well for HDR. Because the Y’CbCr values are created from RGB values that have been gamma corrected, the luma and chroma values are not perfectly separate: subsampling causes minor shifts in both. This isn’t noticeable in Rec 709’s smaller color gamut, but it matters quite a lot in a large color gamut. Every process for scaling a wide color gamut image to fit into a smaller color gamut utilizes desaturation, and it’s not possible to desaturate Y’CbCr footage to that extent without seeing unwanted hue shifts. My recommendation: always use RGB 4:4:4 codecs or capture raw when shooting for HDR, and avoid Y’CbCr 4:2:2 codecs. If a codec doesn’t specify that it is “4:4:4” then it uses Y’CbCr encoding, and should be avoided.”

This is the consensus of industry professionals, including post-production houses, who prefer to see RAW for HDR projects. It is also why streaming platforms like Netflix use the ICtCp color space.
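Adams’s point can be demonstrated numerically. The sketch below (plain Python with BT.709 luma weights; the test color and the 2.4 gamma are our own assumptions for illustration, not from his article) desaturates the same color two ways — by scaling Cb/Cr in the gamma-encoded domain, and by mixing toward true luminance in linear light — and the two results disagree:

```python
# Desaturating the same color two ways (BT.709 weights). The test color
# and the 2.4 gamma are our own assumptions for illustration.
import colorsys

KR, KG, KB = 0.2126, 0.7152, 0.0722   # BT.709 luma coefficients

def rgb_to_ycbcr(r, g, b):
    y = KR * r + KG * g + KB * b      # luma from gamma-encoded RGB
    return y, (b - y) / 1.8556, (r - y) / 1.5748

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.5748 * cr
    b = y + 1.8556 * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b

def desat_ycbcr(rgb, s):
    """Desaturate by scaling chroma in gamma-encoded Y'CbCr."""
    y, cb, cr = rgb_to_ycbcr(*rgb)
    return ycbcr_to_rgb(y, cb * s, cr * s)

def desat_linear(rgb, s, gamma=2.4):
    """Reference: desaturate in linear light, mixing toward luminance."""
    lin = [c ** gamma for c in rgb]
    y = KR * lin[0] + KG * lin[1] + KB * lin[2]
    return tuple((y + s * (c - y)) ** (1 / gamma) for c in lin)

def hue_deg(rgb):
    return colorsys.rgb_to_hsv(*rgb)[0] * 360

red = (0.9, 0.15, 0.1)
h_ycbcr = hue_deg(desat_ycbcr(red, 0.5))
h_lin = hue_deg(desat_linear(red, 0.5))
print(f"Y'CbCr desat: {h_ycbcr:.1f} deg, linear desat: {h_lin:.1f} deg")
```

On this one color the two paths land almost three degrees of hue apart; across a whole wide-gamut image, those disagreements show up as the unwanted hue shifts Adams describes.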

Benefits of Shooting 8K

However, if you take 8K Y’CbCr encoded footage and downconvert to 4K, color resolution increases from 4:2:0/4:2:2 to 4:4:4. Panasonic explains how downconverting increases color resolution, using the EVA1 as an example:

“One excellent benefit of downconverting UHD/4K footage to 1080 HD in post is that you can realize an increase in proportional color resolution. The EVA1 is capable of recording 4K or UHD footage using 4:2:2 color sampling, but when employing frame rates faster than 30 frames per second the recording is reduced to 8 bits per pixel and utilizes 4:2:0 color sampling. If you’re going to be delivering in HD, you’ll be downconverting your UHD/4K footage. After downconversion, the 4:2:0 UHD/4K footage will become HD footage with 4:4:4 color sampling. You can convert 3840×2160 8-bit 4:2:0 recorded footage into 1920×1080 4:4:4 footage in post.”

“To understand the color sampling advantage, you’d have to first understand that the camera records some of its footage in 4:2:2 and, depending on the resolution and frame rate and codec bitrate, it may record some of its footage in 4:2:0 color sampling. 4:2:0 means (simply put) that there is one color sample for every 2×2 block of pixels. In any given 2×2 block of pixels there are four different “brightness” samples, but they all share one “color” sample. Effectively, within the 3840 x 2160 frame, there is a 1920 x 1080 matrix of color samples, one for every 2×2 block of pixels. During the downconversion to HD, each block of 2×2 brightness samples are converted into one HD pixel, creating a 1920 x 1080 matrix of brightness pixels. This 1920 x 1080 “brightness” (luminance) matrix can be effectively married to the originally-recorded 1920 x 1080 “color” matrix, resulting in one individual and unique color sample for each and every brightness pixel. The result is effectively 4:4:4 color sampling at high-definition resolution.”
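Panasonic’s reassembly can be sketched in a few lines. This is a simplified model, assuming planar 4:2:0 with one chroma pair per 2×2 luma block and a plain box average for the luma downconversion (real downconverters use better filters):

```python
# Sketch of the Panasonic downconversion described above: a UHD 4:2:0
# frame (one chroma pair per 2x2 luma block) becomes an HD 4:4:4 frame.
# Plain lists stand in for real decoded video; luma is box-averaged.

def uhd420_to_hd444(y_uhd, cb_hd, cr_hd):
    """y_uhd: 2H x 2W luma; cb_hd/cr_hd: H x W chroma (one per 2x2 block)."""
    h, w = len(cb_hd), len(cb_hd[0])
    y_hd = [[(y_uhd[2*r][2*c] + y_uhd[2*r][2*c+1] +
              y_uhd[2*r+1][2*c] + y_uhd[2*r+1][2*c+1]) / 4.0
             for c in range(w)] for r in range(h)]
    # Every HD pixel now has its own luma AND its own chroma pair: 4:4:4.
    return [[(y_hd[r][c], cb_hd[r][c], cr_hd[r][c])
             for c in range(w)] for r in range(h)]

# Toy 4x4 "UHD" luma plane and 2x2 chroma planes:
y = [[10, 12, 20, 22],
     [14, 16, 24, 26],
     [30, 32, 40, 42],
     [34, 36, 44, 46]]
cb = [[100, 110], [120, 130]]
cr = [[200, 210], [220, 230]]
hd = uhd420_to_hd444(y, cb, cr)
print(hd[0][0])  # (13.0, 100, 200): averaged luma plus the block's own chroma
```

The same arithmetic scales up: 8K 4:2:0 downconverted to 4K likewise yields one chroma sample per delivered pixel.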

Shoot 12-bit or higher
Just the same, there are good arguments for preferring 12-bit RAW to 10-bit Log. Samuel Bilodeau explains:

“… if you’re considering which codecs to use as intermediates for HDR work, especially if you’re planning on an SDR down-grade from these intermediates, 12 bits per channel as a minimum is important. I don’t want to get sidetracked into the math behind it, but just a straight cross conversion from PQ HDR into SDR loses about ½ bit of precision in data scaling, and another ¼ – ½ bit precision in redistributing the values to the gamma 2.4 curve, leaving a little more than 1 bit of precision available for readjusting the contrast curve (these are not uniform values). So, to end up with an error-free 10 bit master (say, for UHD broadcast) you need to encode 12 bits of precision into your HDR intermediate.”
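The fractions of a bit in the quote come from a simple identity: squeezing a signal into a fraction of the available code range costs log2(1/fraction) bits of precision. The specific fractions below are illustrative stand-ins for the three steps Bilodeau describes, not his actual figures:

```python
# Back-of-envelope version of the precision argument above. The three
# range fractions are illustrative assumptions, not Mystery Box's data.
import math

def bits_lost(used_fraction):
    """Bits of precision lost when a signal occupies only this fraction
    of the available code range."""
    return math.log2(1.0 / used_fraction)

scaling  = bits_lost(0.707)   # ~0.5 bit:  PQ -> SDR data scaling
regrade  = bits_lost(0.80)    # ~0.32 bit: redistributing to gamma 2.4
contrast = bits_lost(0.49)    # ~1.03 bit: contrast-curve readjustment
total = scaling + regrade + contrast
print(f"~{total:.1f} bits consumed: a clean 10-bit master needs a "
      f"{math.ceil(10 + total)}-bit intermediate")
```

With roughly two bits consumed along the way, a 10-bit deliverable needs a 12-bit intermediate, which is the quote’s conclusion.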

Reducing bit depth from 12 to 10 bits also requires a considerably larger 3D LUT to obtain the same image quality (fewer interpolation errors).

“Converting one 10-bit format into another 10-bit format will reduce the performance of the signal and should be avoided whenever possible.” – HDR: A Guide to High Dynamic Range Operation for Live Broadcast Applications, Grass Valley

Linear raw excels at capturing highlight information: compared to log, there will be more texture in the highlights and brighter parts of your captured scenes. This will become more and more important as HDR screens get better at showing highlights correctly. Current standard dynamic range displays don’t show highlights well, so the extra highlight data in raw is often of little benefit over log. But that’s going to change in the next few years, so linear recording, with its extra highlight information, will become increasingly important.

– Generation Media

RAW is future-proof
Yet another seldom-discussed advantage of RAW: when shooting log, the quality of the file is completely at the mercy of the camera’s internal encoder, so it’s not possible to improve upon it or change the gamma/color space at some point in the future. With RAW, image quality is determined by the debayering process in post, giving you the ability to re-encode the footage with a higher-quality encoder when one becomes available.

In our own experience, and from watching countless videos uploaded by others, RAW gives significantly better results, and there’s no way we’d go back to shooting Log for HDR. Not to mention that a7S III 4.2K ProRes RAW files contain nearly 25% more resolution, with no baked-in denoising, lens corrections, in-camera sharpening or other enhancements applied.

Update, 28.03.2024: Sony issued a firmware update for the a7S III unlocking 4K DCI (4096 × 2160).


Noise

HDR tends to amplify existing noise in the image, particularly in shadow areas. De-noising in post, with either Resolve’s own built-in noise reduction or Neat Video, will give better results than baked-in, in-camera noise reduction.

From best to worst

“If you’re using a prosumer or entry level professional camera, taking a few preparatory steps to set up how you’ll actually be capturing the image can mean the difference between getting footage that can be used in HDR mastering, vs. footage that can’t.

  1. 16 bpc > 14 bpc > 12 bpc > 10 bpc > 8 bpc: 10 bpc should be your minimum spec.
  2. RAW > LOG > Linear > Gamma 2.4: avoid baking in your gamma at all costs!
  3. Camera Native / BT.2020 > DCI-P3 > Rec. 709 color primaries
  4. RAW > Compressed RAW > Intraframe compression (ProRes, DNxHR, AVC/HEVC Intra) > Interframe compressed (AVC/H.264, HEVC).” – Samuel Bilodeau, Mystery Box

Flexibility in post
Mark Spencer demonstrates how, with the same extreme grade applied, the Sony internal XAVC file falls apart much more easily than the ProRes RAW one:


Lift, gamma, gain
It should be mentioned here that traditional lift/gamma/gain wheels are unsuitable for grading HDR, just one reason why DaVinci Resolve, with its HDR Color Palette, is preferable to Final Cut Pro.

“It is also worth bearing in mind that the traditional lift, gamma, and gain operators are particularly unsuitable for acting on ST 2084 image data, where 0–1 represent zero to 10,000 cd/m². Highlights, in particular, can swing uncontrollably into extreme values as a result of small changes. Modern grading systems offer alternate grading operators tailored to the needs of HDR grading.” – Nick Shaw
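Shaw’s warning is easy to verify with the ST 2084 EOTF itself (the constants below are the published SMPTE values; the chosen code value and gain amount are our own illustration). A gain tweak that would be invisible in gamma 2.4 moves a PQ highlight by hundreds of nits:

```python
# ST 2084 (PQ) EOTF: nonlinear code value in 0-1 -> luminance in cd/m2.
# Constants are the published SMPTE ST 2084 values.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_to_nits(v):
    """Decode a PQ code value (0-1) to absolute luminance in nits."""
    p = v ** (1 / m2)
    return 10000 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

# A 2% gain nudge -- trivial in gamma 2.4 -- swings a PQ highlight hard:
v = 0.90                                  # roughly a 3,900-nit highlight
print(f"{pq_to_nits(v):.0f} nits -> {pq_to_nits(v * 1.02):.0f} nits")
```

That 2% nudge adds on the order of 700 nits, which is exactly why Resolve’s HDR palette replaces lift/gamma/gain with zone-based controls.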

S-Gamut3 or S-Gamut3Cine?

For Color Gamut for RAW Output, select either S-Gamut3/S-Log3 or S-Gamut3.Cine/S-Log3. For what it’s worth, S-Gamut3.Cine/S-Log3 is the most common color space used on Sony FX9, F55, FS7, F5 and FX6 Netflix productions (which also happen to use the P3-D65 color space in post-production). We’ve also been informed that key mastering engineers at Sony Pictures in Culver City recommend selecting the S-Gamut3.Cine/S-Log3 settings in Resolve.

S-Log3 is recorded as Full range; there is no option to select Legal range. RAW is sensor-native data and does not apply any color space or log curve – only the color space metadata is saved. RAW is recorded as 16-bit linear, which is then compressed and recorded in the Ninja V as 12-bit ProRes RAW. The color space is applied to the RAW data during the de-Bayer process.

Exposure is critical. Avoid blown-out highlights and overexposed skin at all costs.

Option #1: Transcode to CinemaDNG with RAW Converter
To work with ProRes RAW footage in DaVinci Resolve, you’ll first need to convert it to cDNG. Download RAW Converter from the Apple App Store. RAW Converter is compatible with both macOS and Windows.

Project settings:

After dropping the folders onto the timeline of the Edit page, you’ll be able to make adjustments to the clips in the Camera RAW tab on the color page of DaVinci Resolve. If you’re grading with an OLED TV, we recommend reducing exposure beforehand to avoid frying your display!

Here’s a comparison between lossless, 3:1 and 5:1 cDNG:

Option #2 (not recommended): Transcode to ProRes 4444 XQ with a LUT using Apple Compressor (display referred)

Import your ProRes RAW files into Apple Compressor and select ProRes 4444 XQ.


Compressor will automatically choose S-Log3/S-Gamut3. If you would prefer to work with S-Log3/S-Gamut3.Cine, in the video tab, select S-Log3/S-Gamut3.Cine for the RAW to Log conversion. Compressor will then automatically choose S-Log3/S-Gamut3.Cine for the Camera LUT.

Here are the project settings for ProRes 4444 XQ in DaVinci Resolve 18. Set ‘HDR mastering is for 800 nits’ if grading with an earlier LG OLED TV, or to 1,000 nits if using a MacBook Pro (2021) or external monitor capable of 1,000 nits.

And here are the advanced RCM settings for working with ProRes 4444 XQ with a LUT.

The DRT settings are completely up to user preference. Setting Input and Output DRTs to DaVinci is a popular choice.

Option #3 (preferred): Transcode to ProRes 4444 XQ without a LUT or CinemaDNG using Assimilate Play Pro Studio

Download and install Assimilate Play Pro Studio, which is available for both macOS and Windows. When converting ProRes RAW to CinemaDNG, it supports all cameras and recorders without exception.

Open Assimilate Play Pro Studio and select the folder where the clips are located.

To convert to Apple ProRes, hit the Render button, choose ‘Same as Source’ for the output, select Apple ProRes 4444 XQ, uncheck ‘Use Display LUT’, choose an output folder, enable ‘Use Source Name’, then press ‘Yes’.

The DaVinci Resolve Studio project settings can be found in part one of the guide.

To convert ProRes RAW to CinemaDNG, hit the Render button, set Output to CinemaDNG and select the compression. Once you’ve selected the compression setting, hit the Browse button, select your output folder and enable Use Source Name. Now, hit OK and wait for Play Pro Studio to finish processing the cDNG files. Use the same project settings on the color management page as with RAW Converter above.


RED Komodo Workflow

RED: Exposure Strategy

The histogram

Our goal should be to avoid clipping in the highlights while maintaining acceptable noise levels. The principal tool for judging exposure is the histogram, which shows the precise luminance levels of the red, green and blue pixels after the ISO and white balance have been set. Monitors, while indispensable, are less than ideal for gauging brightness. The histogram allows us to see how brightness is distributed throughout the image and how near the shadows and highlights are to clipping. However, the histogram alone cannot indicate correct exposure: it can’t warn us of areas in the image that are about to clip, or that already have, which is where the false color tools and traffic lights come in.

False Color Modes

The RED Komodo has two false color modes: false color exposure and false color video. The false color exposure mode alerts us to clipping in the highlights (indicated in red) and clipping in the shadows (purple) in addition to indicating 18% gray (green). There is often a great deal of exposure latitude here before an excessive amount of purple or red occurs.

Image courtesy of RED.com

False color video mode furnishes more granular information about brightness levels in different areas of the image, but should be used at ISO 800 and above for the most accurate results. Its colors are based on IRE values, not the RAW data, and it is useful for evaluating lighting and for adjusting a suggested look prior to sending footage off to post-production. There are nine levels: green represents 18% gray, pink corresponds to typical Caucasian skin tones, teal signifies textured shadows, straw, yellow and orange edge ever closer to white, and the remaining colors indicate the tonal extremes.

Traffic Lights

The red, green and blue dots, commonly referred to as the traffic lights, inform us when a particular color channel is clipping. When around 2% of the pixels for an individual color channel have clipped, the corresponding traffic light is triggered. Sitting between the highlight and shadow traffic lights are the goal posts. The full extent of each goal post represents a quarter of all the image pixels for a particular color channel. Ordinarily, the shadow goal posts can rise by as much as 50% and still yield acceptable noise levels; but even a minuscule amount in the highlight goal posts can spell disaster, especially in HDR.
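As a mental model, the trigger can be written in a few lines. This is a sketch of the ~2% rule as described above, not RED’s actual firmware logic, and the thresholds are illustrative:

```python
# Sketch of the "traffic light" rule described above: flag a channel
# once ~2% of its pixels clip. Thresholds are illustrative, not RED's
# actual firmware values.

def traffic_lights(frame, clip_at=1.0, trigger=0.02):
    """frame: list of (r, g, b) pixels in 0-1. Returns per-channel flags."""
    n = len(frame)
    lights = {}
    for i, channel in enumerate("RGB"):
        clipped = sum(1 for px in frame if px[i] >= clip_at)
        lights[channel] = clipped / n >= trigger
    return lights

# 3% of pixels have a clipped red channel; green and blue are safe:
frame = [(1.0, 0.8, 0.7)] * 3 + [(0.9, 0.5, 0.4)] * 97
print(traffic_lights(frame))  # {'R': True, 'G': False, 'B': False}
```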

Glint of sunlight on the phone: a situation where clipping of highlights is unavoidable.

RED Exposure Strategy in Practice

ISO

Because so much of HDR’s dynamic range is located in the shadows where noise tends to be most prevalent, and as noise is far more distracting in HDR than in SDR, our first suggestion is to cut the manufacturer’s overly optimistic base ISO rating in half, then go ahead and bump the ISO to 800 or higher for bright scenes with lots of important highlight detail.

Here’s what the Komodo’s noise looks like at various ISOs.

Exposure

Our recommendation is to use the false color exposure tool and reduce exposure until the clipped portions of the image, denoted in red, disappear. The advantages are arriving quickly at the optimal exposure with no guesswork, shallower depth of field and less noise in the image. One downside of ETTR is that we’re no longer dealing with WYSIWYG: it can be difficult to evaluate the image when exposing to the right.

“Shooting RAW/Log is not about producing a final image, it’s about producing a “digital negative” that has great potential further down the line.” – Blain Brown

Photo: Mark Toia

Filmmaker: With the early Red cameras—back in the Epic days—I remember people talking about how you needed to expose to the right on the histogram because if you underexposed you were in trouble. As Reds have evolved, do you not have to worry as much about the low end of the curve anymore?

Erik Messerschmidt: No, I just protect the highlights. I feel like the camera has detail for days in the shadows. I do subscribe generally to the “expose to the right” approach, though. I didn’t use to, but I’ve turned around on that just because I feel like the color fidelity is superior if you put a little bit more light onto the sensor. I have that same opinion for every digital camera, not just Red. 

Filmmaker Magazine

In addition to adjusting the aperture to control the amount of light striking the sensor, RED has another trick up its sleeve: ISO can be used to precisely determine the number of stops of dynamic range/latitude above and below middle gray. If you’re shooting outdoors in bright light, ISO 800 will probably give you the best balance between highlight and shadow latitude; if you’re indoors or working in low contrast lighting, a lower ISO, which allocates more dynamic range to the shadows, reducing the appearance of noise, is preferable. At the same time, if you’re working on a film, you don’t want to have wild swings in ISO, as that could introduce jarring changes in texture.

Another thing to keep in mind is that, while REDCODE RAW has tremendous latitude, at lower ISOs, you’ve got less highlight protection, and correct exposure becomes more critical. Nevertheless, the ability to redistribute latitude/dynamic range above and below middle gray offers creative control that we would never want to relinquish, and it baffles us that many stubbornly keep their ISO permanently fixed at 800. It goes without saying that this is just how we work: if you prefer to use a light meter, waveform monitor, zebras or just eyeball exposure using a calibrated monitor, those are all valid ways of working, too. Lastly, don’t let anyone tell you they’ve done all the tests themselves so you don’t have to: everyone should be performing their own tests to determine what works best for them.
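The idea of redistributing latitude with metadata ISO can be sketched as follows. The baseline split — 16 stops total with 6 above middle gray at ISO 800 — is a hypothetical figure chosen for illustration, not RED’s published specification:

```python
# Illustration of the metadata-ISO idea above: each doubling of ISO
# shifts one stop of latitude from shadows to highlights. The baseline
# split (16 stops, 6 above middle gray at ISO 800) is hypothetical.
import math

def latitude_split(iso, base_iso=800, total_stops=16, above_at_base=6):
    """Return (stops above, stops below) middle gray at a given ISO."""
    shift = math.log2(iso / base_iso)
    above = above_at_base + shift
    return above, total_stops - above

for iso in (250, 800, 3200):
    above, below = latitude_split(iso)
    print(f"ISO {iso:>4}: {above:.1f} stops above / {below:.1f} below middle gray")
```

Lower ISOs allocate more stops to the shadows (less visible noise); higher ISOs protect the highlights — the trade-off described in the paragraph above.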

How to Normalize ETTR Footage In Post

Image credit: Soumyadip (Roni) Sengupta and Mykhailo (Misha) Shvets

When exposing to the right (ETTR), we wholeheartedly recommend normalizing exposure by creating a convex curve like so:
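For intuition (this is not the exact curve pictured, and it assumes a linear-light working space): normalizing ETTR footage amounts to dividing linear values by two raised to the number of stops over-exposed, which reads as a convex pull-down when viewed through a display curve:

```python
# A minimal take on normalizing ETTR footage (not the exact curve in the
# screenshot): in linear light, pulling exposure back down by the number
# of stops you over-exposed is a simple divide.

def normalize_ettr(linear_pixels, stops_over):
    """Undo an ETTR offset: divide linear-light values by 2**stops_over."""
    gain = 2.0 ** stops_over
    return [v / gain for v in linear_pixels]

# A gray card exposed 2 stops to the right lands at 0.72 in linear light;
# normalizing returns it to 18% gray while highlight detail is preserved:
clip = [0.72, 1.44, 2.88]            # gray card, +1 stop, +2 stops
print(normalize_ettr(clip, 2))       # [0.18, 0.36, 0.72]
```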

High or Low Quality?

Regarding the HQ, MQ and LQ settings, on a live stream about the RED Komodo, Jarred Land had this to say:

The only reason you should select low quality is […] to record for a very long time [or] because you have a limited amount of cards […]. The medium shouldn’t even really be there, but you know, people would probably revolt if we [didn’t include it] … You should just be shooting on high quality all the time…

But on a live stream at the time of the launch of the Komodo X, he said,

I always use ELQ and I’m supposed to tell people not to use ELQ on anything important… ELQ is awesome… if you don’t have too much detail. I really love it.

For TV, online content, documentary and interviews, RED’s own website recommends the LQ setting. Since sensor windowing adversely impacts data rates, we unhesitatingly recommend HQ for anamorphic shooting.

R3D HQ vs R3D LQ

RED Komodo Project Settings

Go to Preferences > Decode Options and change Use GPU for R3D from Debayer to Decompression and Debayer. You will be prompted to restart your computer.
Camera RAW settings.
Camera RAW controls on the Color Page.

Timeline and delivery resolution

For those shooting 6K 17:9 and delivering for UHD displays, here are the timeline and render resolution settings.

Node based workflow with RED output transform LUT

We recommend a node-based color-managed workflow, as RCM with IPP2 causes lifted blacks.

Color Management settings.
Download and install REDCINE-X PRO.
Go to File > IPP2 LUT Creator.
Open LUT folder on Color Management page of Resolve.
Place the LUT inside the RED folder.
Set 3D Lookup Table Interpolation to Tetrahedral and update lists.
Apply the LUT to the last node in your color pipeline.

An easier method is to place the node with the RED Output Transform LUT on the timeline level, which will affect all clips in the project.

Click on Clip and select Timeline.
Hit Option + S, then apply the RED Output Transform LUT to the node on the timeline level.

In order to limit gamut to P3-D65, create a new node directly after the RED Output Transform LUT by dragging and dropping the Gamut Limiter from Resolve FX Color. For Current Gamut, select Rec.2020, for Current Gamma select ST2084 1,000 nits, then Limit Gamut to P3-D65.

Save trimmed R3D files

Go to File > Media Management and press Start.

Mission completed! The .RDC folder containing the R3Ds, along with the .rtn and .drt files, has been saved.

Comments

  1. Wondering why the RAW transcode defaults to Rec 2020PQ. Have you tried any other settings like not changing it from SLOG3/SGamut i.e. what came from the camera? Let Resolve handle it?

  2. In your comparison of XAVC S-I vs RAW, you used an a7S III. When shooting with an FX6, a lot of those problems, like not being able to turn off noise reduction, don’t exist. And from many tests online, they all seem to show that shooting internal XAVC-I is better than shooting RAW, at least in the case of the FX6.

    https://www.cined.com/sony-fx6-lab-test-external-prores-raw-vs-internal-xavc-intra/

    Knowing this, would the FX6 be okay with shooting internally rather than raw in an HDR workflow?

      1. Thank you for the quick reply!
        I’d love to do some side-by-side test of the 2 with my camera.

        What would be a good way about setting up a scene to shoot to test this?

      2. I’m sorry, but I’m no authority when it comes to setting up test scenes. You can always examine your clips side-by-side on the timeline of any NLE that supports PRR if you like. Nevertheless, no matter which camera is used, Y’CbCr is inferior to RGB 4:4:4. Streaming networks also avoid Y’CbCr because it is not suitable for HDR, choosing ICtCp instead. Yet another reason is that, when shooting log, the quality of the file is completely at the mercy of the camera’s internal encoder, meaning that it’s not possible to improve upon this or change the gamma/color space at some point in the future; whereas with raw, the image quality is determined by the debayering process in post, giving you the ability to re-encode the footage with a higher quality encoder when one becomes available. Furthermore, should you ever decide to upgrade to Dolby Vision, you will definitely want 12-bit files, not 10-bit. Lastly, you never want to convert one 10-bit format into another 10-bit format.

      3. Thank you for the insight!

        After a lot of headaches trying to get ProRes RAW files into a format that is compatible with Resolve on a Windows system, I can now confirm that side by side, the RAW files are indeed much better for HDR grading and have way more information in the highlights.

        As much as I love the convenience of the internal XAVC-I codec on the FX6 and how well it grades for SDR content, I’ll now be filming in RAW for all footage that will be delivered in HDR.
