Part I: Project settings, Project Render Settings, Generate MaxCLL and MaxFall HDR10 Metadata
Part II: RAW
Part III: The Grade
Part IV: The State of HDR Film Emulation LUTs & Plugins
Part V: Why HDR Production Monitors Matter
Avoid Y’CbCr
Limitations of Y’CbCr with wide colour gamut and high dynamic range
- Quantization distortions due to bit depth limitations with the increased colour volume.
- Chroma subsampling distortions due to a perceptually uneven distribution of code words.
- Colour volume mapping distortions due to incorrectly predicted hue and luminance.
- Error propagation from chroma to luma channels.
For HDR projects, steer clear of recording 4:2:2 internal if at all possible. Art Adams explains:
“The Y’CbCr encoding model is popular because it conceals subsampling artifacts vastly better than does RGB encoding. Sadly, while Y’CbCr works well in Rec 709, it doesn’t work very well for HDR. Because the Y’CbCr values are created from RGB values that have been gamma corrected, the luma and chroma values are not perfectly separate: subsampling causes minor shifts in both. This isn’t noticeable in Rec 709’s smaller color gamut, but it matters quite a lot in a large color gamut. Every process for scaling a wide color gamut image to fit into a smaller color gamut utilizes desaturation, and it’s not possible to desaturate Y’CbCr footage to that extent without seeing unwanted hue shifts. My recommendation: always use RGB 4:4:4 codecs or capture raw when shooting for HDR, and avoid Y’CbCr 4:2:2 codecs. If a codec doesn’t specify that it is “4:4:4” then it uses Y’CbCr encoding, and should be avoided.”
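A toy numerical sketch of Adams's point, assuming the Rec 709 luma weights and two invented neighbouring pixel values: chroma is averaged the way 4:2:2/4:2:0 subsampling does, and the reconstructed pixel no longer matches the original because luma and chroma are not perfectly separate after gamma correction.

```python
import numpy as np

# Rec 709 luma coefficients, applied to gamma-corrected ("prime") RGB
KR, KG, KB = 0.2126, 0.7152, 0.0722

def rgb_to_ycbcr(rgb):
    r, g, b = rgb
    y = KR * r + KG * g + KB * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return np.array([y, cb, cr])

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc
    r = y + 1.5748 * cr
    b = y + 1.8556 * cb
    g = (y - KR * r - KB * b) / KG
    return np.array([r, g, b])

# two invented neighbouring gamma-encoded pixels: saturated red, dark blue
p1 = np.array([0.9, 0.05, 0.05])
p2 = np.array([0.05, 0.05, 0.6])
c1, c2 = rgb_to_ycbcr(p1), rgb_to_ycbcr(p2)

# 4:2:2-style subsampling: both pixels share the averaged chroma sample
shared = (c1[1:] + c2[1:]) / 2
r1 = ycbcr_to_rgb(np.array([c1[0], *shared]))

print(p1, r1)  # reconstructed pixel 1 has shifted in both hue and luminance
```

The wider the gamut, the more saturated such neighbouring pixels can be, and the larger this reconstruction error becomes, which is why the artifact stays hidden in Rec 709 but not in Rec 2020.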
Benefits of Shooting 8K
However, if you take 8K Y’CbCr encoded footage and downconvert to 4K, color resolution increases from 4:2:0/4:2:2 to 4:4:4. In 4:4:4, the chrominance components (Cb and Cr) are sampled at the same rate as the luminance component (Y), meaning no information is lost or reduced in the color channels. Panasonic explains how downconverting increases color resolution, using the EVA1 as an example:
“One excellent benefit of downconverting UHD/4K footage to 1080 HD in post is that you can realize an increase in proportional color resolution. The EVA1 is capable of recording 4K or UHD footage using 4:2:2 color sampling, but when employing frame rates faster than 30 frames per second the recording is reduced to 8 bits per pixel and utilizes 4:2:0 color sampling. If you’re going to be delivering in HD, you’ll be downconverting your UHD/4K footage. After downconversion, the 4:2:0 UHD/4K footage will become HD footage with 4:4:4 color sampling. You can convert 3840×2160 8-bit 4:2:0 recorded footage into 1920×1080 4:4:4 footage in post.”
“To understand the color sampling advantage, you’d have to first understand that the camera records some of its footage in 4:2:2 and, depending on the resolution and frame rate and codec bitrate, it may record some of its footage in 4:2:0 color sampling. 4:2:0 means (simply put) that there is one color sample for every 2×2 block of pixels. In any given 2×2 block of pixels there are four different “brightness” samples, but they all share one “color” sample. Effectively, within the 3840 x 2160 frame, there is a 1920 x 1080 matrix of color samples, one for every 2×2 block of pixels. During the downconversion to HD, each block of 2×2 brightness samples are converted into one HD pixel, creating a 1920 x 1080 matrix of brightness pixels. This 1920 x 1080 “brightness” (luminance) matrix can be effectively married to the originally-recorded 1920 x 1080 “color” matrix, resulting in one individual and unique color sample for each and every brightness pixel. The result is effectively 4:4:4 color sampling at high-definition resolution.”
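Panasonic's 2×2 bookkeeping can be sketched in a few lines of Python, using a toy 4×4 "frame" standing in for UHD (the sample values are invented; only the shapes matter):

```python
import numpy as np

# toy 4:2:0 UHD frame: 4x4 luma samples, 2x2 chroma samples
Y  = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for 3840x2160 luma
Cb = np.array([[0.1, 0.2], [0.3, 0.4]])         # one sample per 2x2 block
Cr = np.array([[-0.1, -0.2], [-0.3, -0.4]])     # stand-in for 1920x1080 chroma

# downconvert luma: average each 2x2 block into one HD-resolution pixel
Y_hd = Y.reshape(2, 2, 2, 2).mean(axis=(1, 3))

# chroma was already recorded at HD resolution, so after downconversion
# every HD pixel has its own Y', Cb and Cr sample: effectively 4:4:4
assert Y_hd.shape == Cb.shape == Cr.shape
print(Y_hd)
```

The same shape argument scales directly from this toy frame to real 3840×2160 footage: the 1920×1080 chroma matrix is "married" to the 1920×1080 averaged luma matrix one-to-one.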
Shoot 12-bit or higher
“But the limit to RGB begins to hit its threshold when it comes to HDR mastering. The bare minimum for high dynamic range to be effective is 12-bit files, so when HDR is the target for mastering, with RGB files already working at their limit, raw files with more efficient compression and higher data rates and bit depths may have more data to work with that can be assigned to a larger, more malleable set of code values.” – Michael Cioni
Just the same, there are good arguments for preferring 12-bit RAW to 10-bit Log. Matthew Bilodeau explains:
“… if you’re considering which codecs to use as intermediates for HDR work, especially if you’re planning on an SDR down-grade from these intermediates, 12 bits per channel as a minimum is important. I don’t want to get sidetracked into the math behind it, but just a straight cross conversion from PQ HDR into SDR loses about ½ bit of precision in data scaling, and another ¼ – ½ bit precision in redistributing the values to the gamma 2.4 curve, leaving a little more than 1 bit of precision available for readjusting the contrast curve (these are not uniform values). So, to end up with an error-free 10 bit master (say, for UHD broadcast) you need to encode 12 bits of precision into your HDR intermediate.”
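Bilodeau's bit budget can be written out as simple arithmetic (the half-bit and one-bit figures are his approximations from the quote above, not exact values):

```python
# budget for deriving a 10-bit SDR master from a PQ HDR intermediate
bits_intermediate = 12     # bit depth of the HDR intermediate

loss_scaling = 0.5         # ~1/2 bit lost in the PQ -> SDR data scaling
loss_gamma   = 0.5         # up to ~1/2 bit redistributing to the gamma 2.4 curve
loss_grade   = 1.0         # ~1 bit of headroom for readjusting contrast

effective_bits = bits_intermediate - (loss_scaling + loss_gamma + loss_grade)
print(effective_bits)      # 10.0 -> just enough for an error-free 10-bit master
```

Start from a 10-bit intermediate instead and the same budget leaves only about 8 effective bits, which is why 10-bit-to-10-bit conversions are discouraged below.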
Reducing bit depth from 12 to 10 bits also requires a considerably greater 3D-LUT size to obtain the same image quality (fewer interpolation errors).
“Converting one 10-bit format into another 10-bit format will reduce the performance of the signal and should be avoided whenever possible.” – HDR: A Guide to High Dynamic Range Operation for Live Broadcast Applications, Grass Valley
Linear raw excels at capturing highlight information: compared to log, you will find more texture in the highlights and brighter parts of your captured scenes. This will become increasingly important as HDR screens become better able to reproduce highlights correctly. Current standard dynamic range displays don’t show highlights well, so the extra highlight data in raw often offers little benefit over log. But that’s going to change in the next few years, and linear recording, with its extra highlight information, will become more and more important.
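A rough code-value count shows why linear encoding favours highlights. Assuming, for illustration, a 12-bit encoding spanning 12 stops: linear allocation gives the top stop half of all code values, while an idealized log curve spreads them evenly across the stops.

```python
# illustrative bit-depth/stop assumptions, not any specific camera's figures
bits, stops = 12, 12
codes = 2 ** bits                 # 4096 code values in total

# linear: each stop down from clip uses half the remaining code values,
# so the brightest stop alone consumes half of them
top_stop_linear = codes // 2      # 2048 of 4096 codes

# idealized log curve: code values distributed evenly across all stops
top_stop_log = codes // stops     # ~341 of 4096 codes

print(top_stop_linear, top_stop_log)
```

That six-fold difference in the top stop is the extra highlight texture the paragraph above describes, and it is exactly the region HDR displays are learning to reproduce.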
Generation Media
Lift, gamma, gain
It should be mentioned here that traditional lift/gamma/gain wheels are unsuitable for grading HDR, just one reason why DaVinci Resolve, with its HDR Color Palette, is preferable to Final Cut Pro.
“It is also worth bearing in mind that the traditional lift, gamma, and gain operators are particularly unsuitable for acting on ST 2084 image data, where 0-1 represent zero to 10 000 cd/m2. Highlights, in particular, can swing uncontrollably into extreme values as a result of small changes. Modern grading systems offer alternate grading operators tailored to the needs of HDR grading.” – Nick Shaw
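Shaw's warning can be made concrete with the ST 2084 EOTF itself. The constants below come from the published PQ definition; the 0.90 code value and the modest 5% gain are invented for illustration.

```python
# ST 2084 (PQ) EOTF constants from the published specification
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_eotf(e):
    """Map a PQ code value in 0-1 to display light in cd/m2 (0 to 10,000)."""
    p = e ** (1 / m2)
    return 10000 * (max(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

before = pq_eotf(0.90)          # a bright highlight, roughly 3,900 cd/m2
after  = pq_eotf(0.90 * 1.05)   # a mere 5% "gain" on the code value

print(before, after)            # the displayed highlight jumps by over 50%
```

A 5% nudge that would be barely visible in Rec 709 territory swings this PQ highlight by more than 2,000 cd/m², which is why purpose-built HDR grading operators are preferable to gain wheels.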
RED Komodo Workflow

RED: Exposure Strategy
The histogram
Our goal should be to avoid clipping in the highlights while maintaining acceptable noise levels. The principal tool for judging exposure levels is the histogram, which shows the precise luminance levels of the red, green, and blue pixels after the ISO and white balance have been set. Monitors, while indispensable, are less than ideal for gauging brightness. The histogram allows us to see how brightness is distributed throughout the image and how near the shadows and highlights are to clipping. However, the histogram alone cannot indicate correct exposure: it can’t warn us of areas in the image that are about to clip, or that already have, which is where the false color tools and traffic lights come in.

False Color Modes
The RED Komodo has two false color modes: false color exposure and false color video. The false color exposure mode alerts us to clipping in the highlights (indicated in red) and clipping in the shadows (purple) in addition to indicating 18% gray (green). There is often a great deal of exposure latitude here before an excessive amount of purple or red occurs.

False color video mode furnishes us with more granular information regarding brightness levels in different areas of the image but should be used at ISO 800 and above for the most accurate results. Its colors are based on IRE values, not the RAW data, and it is useful for evaluating lighting and for adjusting a suggested look prior to sending footage off to post-production. There are nine levels: green represents 18% gray, pink corresponds to typical Caucasian skin tones, teal signifies textured shadows, straw, yellow and orange draw ever closer to white, and the remaining colors indicate the tonal extremities.

Traffic Lights
The red, green and blue dots, commonly referred to as the traffic lights, inform us when a particular color channel is clipping. When around 2% of the pixels for an individual color channel have clipped, the corresponding traffic light is triggered. Sitting between the highlight and shadow traffic lights are the goal posts. The fullest extent of each goal post represents a quarter of all the image pixels for a particular color channel. Ordinarily, the shadow goal posts can rise by as much as 50% and still yield acceptable noise levels; but even a minuscule amount in the highlight goal posts can spell disaster, especially in HDR.
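The trigger logic is easy to sketch. The Python model below is illustrative, not RED's actual implementation; only the 2% threshold comes from the paragraph above, and the test frame is synthetic.

```python
import numpy as np

CLIP_FRACTION = 0.02  # a light trips when ~2% of a channel's pixels clip

def traffic_lights(frame, clip_level=1.0):
    """frame: HxWx3 float array; returns which of R, G, B lights are on."""
    # fraction of clipped pixels, computed per colour channel
    clipped = (frame >= clip_level).mean(axis=(0, 1))
    return {ch: frac >= CLIP_FRACTION for ch, frac in zip("RGB", clipped)}

# synthetic 100x100 frame, then blow out exactly 2% of the red channel
frame = np.random.default_rng(0).uniform(0.0, 0.95, (100, 100, 3))
frame[:20, :10, 0] = 1.0

print(traffic_lights(frame))  # {'R': True, 'G': False, 'B': False}
```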

RED Exposure Strategy in Practice
ISO
Because so much of HDR’s dynamic range sits in the shadows, where noise tends to be most prevalent, and because noise is far more distracting in HDR than in SDR, our first suggestion is to cut the manufacturer’s overly optimistic base ISO rating in half, then bump the ISO to 800 or higher for bright scenes with lots of important highlight detail.
Here’s what the Komodo’s noise looks like at various ISOs.




Exposure
Our recommendation is to use the false color exposure tool and reduce exposure until clipped portions of the image denoted in red disappear. The advantages are being able to quickly arrive at the optimal exposure with no guesswork, shallower depth of field and less noise in the image. One downside of ETTR is that we’re no longer dealing with WYSIWYG: it can be difficult to evaluate the image when exposing to the right.

Filmmaker: With the early Red cameras—back in the Epic days—I remember people talking about how you needed to expose to the right on the histogram because if you underexposed you were in trouble. As Reds have evolved, do you not have to worry as much about the low end of the curve anymore?
Erik Messerschmidt: No, I just protect the highlights. I feel like the camera has detail for days in the shadows. I do subscribe generally to the “expose to the right” approach, though. I didn’t use to, but I’ve turned around on that just because I feel like the color fidelity is superior if you put a little bit more light onto the sensor. I have that same opinion for every digital camera, not just Red.
Filmmaker Magazine
In addition to adjusting the aperture to control the amount of light striking the sensor, RED has another trick up its sleeve: ISO can be used to precisely determine the number of stops of dynamic range/latitude above and below middle gray. If you’re shooting outdoors in bright light, ISO 800 will probably give you the best balance between highlight and shadow latitude; if you’re indoors or working in low contrast lighting, a lower ISO, which allocates more dynamic range to the shadows, reducing the appearance of noise, is preferable. At the same time, if you’re working on a film, you don’t want to have wild swings in ISO, as that could introduce jarring changes in texture.
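The redistribution can be modelled with a one-line rule of thumb: each doubling of ISO shifts roughly one stop of latitude from the shadows to the highlights. The even 8/8-stop split at ISO 800 below is a placeholder for illustration, not RED's published figure for any particular sensor.

```python
from math import log2

# hypothetical latitude split at the ISO 800 pivot (placeholder values)
BASE_ABOVE, BASE_BELOW, BASE_ISO = 8.0, 8.0, 800

def latitude(iso):
    """Stops of latitude (above, below) middle gray at a given ISO.

    Each doubling of ISO moves ~1 stop from shadow latitude to
    highlight latitude; halving it does the reverse.
    """
    shift = log2(iso / BASE_ISO)
    return BASE_ABOVE + shift, BASE_BELOW - shift

print(latitude(400))   # more shadow latitude: less visible noise indoors
print(latitude(1600))  # more highlight protection for bright exteriors
```

The total latitude never changes; only where the pivot of middle gray sits within it does, which is exactly the creative control described above.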
Another thing to keep in mind is that, while REDCODE RAW has tremendous latitude, at lower ISOs, you’ve got less highlight protection, and correct exposure becomes more critical. Nevertheless, the ability to redistribute latitude/dynamic range above and below middle gray offers creative control that we would never want to relinquish, and it baffles us that many stubbornly keep their ISO permanently fixed at 800. It goes without saying that this is just how we work: if you prefer to use a light meter, waveform monitor, zebras or just eyeball exposure using a calibrated monitor, those are all valid ways of working, too. Lastly, don’t let anyone tell you they’ve done all the tests themselves so you don’t have to: everyone should be performing their own tests to determine what works best for them.
How to Normalize ETTR Footage In Post

For footage exposed to the right (ETTR), we recommend normalizing exposure by creating a convex curve like so:
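In a scene-linear working space, the first step of normalizing ETTR footage can be as simple as an exposure offset; the convex curve described above accomplishes a similar correction directly in the working gamma. A minimal sketch, with invented sample values:

```python
import numpy as np

def normalize_ettr(linear, stops_over):
    """Pull scene-linear ETTR footage down by the stops it was over-exposed."""
    return linear * 2.0 ** (-stops_over)

# invented scene values shot 1.5 stops to the right of nominal exposure
scene = np.array([0.18, 0.50, 1.00]) * 2 ** 1.5

print(normalize_ettr(scene, 1.5))  # middle gray lands back at 0.18
```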

High or Low Quality?
Regarding the HQ, MQ and LQ settings, on a live stream about the RED Komodo, Jarred Land had this to say:
The only reason you should select low quality is […] to record for a very long time [or] because you have a limited amount of cards […]. The medium shouldn’t even really be there, but you know, people would probably revolt if we [didn’t include it] … You should just be shooting on high quality all the time…
But on a live stream at the time of the launch of the Komodo X, he said,
I always use ELQ and I’m supposed to tell people not to use ELQ on anything important… ELQ is awesome… if you don’t have too much detail. I really love it.
For TV, online content, documentary and interviews, RED’s own website recommends the LQ setting. Sensor windowing adversely impacts data rates, and we unhesitatingly recommend HQ for anamorphic shooting.


RED Komodo Project Settings




Timeline and delivery resolution
For those shooting 6K 17:9 and delivering for UHD displays, here are the timeline and render resolution settings.


Node based workflow with RED output transform LUT
We recommend a node-based color managed workflow.








An easier method is to place the node with the RED Output Transform LUT on the timeline level, which will affect all clips in the project.


In order to limit gamut to P3-D65, create a new node directly after the RED Output Transform LUT by dragging and dropping the Gamut Limiter from Resolve FX Color. For Current Gamut, select Rec.2020, for Current Gamma select ST2084 1,000 nits, then Limit Gamut to P3-D65.
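What the Gamut Limiter guards against can be verified numerically: some perfectly legal Rec.2020 values have no P3-D65 equivalent. A sketch using the standard (rounded) D65 RGB-to-XYZ matrices for the two gamuts; a negative component after conversion means the colour lies outside P3-D65.

```python
import numpy as np

# rounded D65 linear RGB -> XYZ matrices for the two colour spaces
BT2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                          [0.2627, 0.6780, 0.0593],
                          [0.0000, 0.0281, 1.0610]])
P3D65_TO_XYZ  = np.array([[0.4866, 0.2657, 0.1982],
                          [0.2290, 0.6917, 0.0793],
                          [0.0000, 0.0451, 1.0439]])

def bt2020_to_p3(rgb):
    """Re-express linear BT.2020 RGB in P3-D65; negatives = out of gamut."""
    return np.linalg.inv(P3D65_TO_XYZ) @ (BT2020_TO_XYZ @ rgb)

pure_2020_green = np.array([0.0, 1.0, 0.0])
print(bt2020_to_p3(pure_2020_green))  # negative red component: outside P3-D65
```

Without the limiter, such out-of-range values would be left for the downstream display to clip unpredictably; limiting to P3-D65 keeps the Rec.2020 container within what current mastering displays can actually show.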

Save trimmed R3D files
Go to File > Media Management and press Start.
Mission completed! The .RDC folder containing the R3Ds and the .rtn, as well as the .drt, has been saved.


Wondering why the RAW transcode defaults to Rec 2020PQ. Have you tried any other settings like not changing it from SLOG3/SGamut i.e. what came from the camera? Let Resolve handle it?
Apple Compressor is applying the manufacturer-specified processing. Your other choices are rec.601, rec.709, rec.2020, rec.2020 HLG and P3D65 PQ. There is no advantage to not applying the S-Log3/S-Gamut3 Camera LUT in Compressor. You can learn more about ProRes RAW here: https://www.apple.com/final-cut-pro/docs/Apple_ProRes_RAW.pdf
All RAW clips are by definition HDR.
In your comparison of XAVC-SI vs Raw, you used an A7Siii. When shooting with an FX6, a lot of those problems, like not being able to turn off noise reduction, don’t exist. And many tests online seem to show that shooting internal XAVC-I is better than shooting Raw, at least in the case of the FX6.
https://www.cined.com/sony-fx6-lab-test-external-prores-raw-vs-internal-xavc-intra/
Knowing this, would the FX6 be okay with shooting internally rather than raw in an HDR workflow?
ProRes RAW has less processing and more color information compared to internal XAVC SI. Avoid 10-bit YCbCr at all costs.
Thank you for the quick reply!
I’d love to do some side-by-side test of the 2 with my camera.
What would be a good way about setting up a scene to shoot to test this?
I’m sorry, but I’m no authority when it comes to setting up test scenes. You can always examine your clips side-by-side on the timeline of any NLE that supports PRR if you like. Nevertheless, no matter which camera is used, Y’CbCr is inferior to RGB 4:4:4. Streaming networks also avoid Y’CbCr because it is not suitable for HDR, choosing ICtCp instead. Yet another reason is that, when shooting log, the quality of the file is completely at the mercy of the camera’s internal encoder, meaning that it’s not possible to improve upon this or change the gamma/color space at some point in the future; whereas with raw, the image quality is determined by the debayering process in post, giving you the ability to re-encode the footage with a higher quality encoder when one becomes available. Furthermore, should you ever decide to upgrade to Dolby Vision, you will definitely want 12-bit files, not 10-bit. Lastly, you never want to convert one 10-bit format into another 10-bit format.
Thank you for the insight!
After a lot of headaches trying to get ProRes RAW files into a format that is compatible with Resolve on a Windows system, I can now confirm that side by side, the raw files are indeed much better for HDR grading and have way more information in the highlights.
As much as I love the convenience and how well the internal XAVC-I codec on the FX6 is for grading SDR content, I’ll now be filming in Raw for all footage that will be delivered in HDR