We just revised the section on working color spaces in the Monster Guide, so we’re sharing it here.
When it comes to post-processing, the keys to preserving image quality are applying as few transforms as possible and working at the highest available bit depth in a native color space wider than the display color space. So, should you be changing your timeline color space to DaVinci Wide Gamut Intermediate or not?
To begin with, you’ve presumably chosen a camera for its particular look, and changing the working color space in post will inevitably deviate from that look: sometimes subtly, sometimes not so subtly, and often in ways that only become apparent once your favorite creative LUT, developed specifically for that camera, is applied. What selecting DaVinci Wide Gamut Intermediate or ACES as the working color space will not do is magically improve image quality, reveal colors that weren’t captured at the outset, or make the picture look any more cinematic. That being the case, why would anyone choose, say, DaVinci Wide Gamut rather than REDWideGamutRGB/Log3G10 for their timeline color space?
If you’re a professional colorist handling footage from a number of different cameras on the same timeline, with deliverables targeting everything from social media and streaming to broadcast and theatrical release, mapping everything into a unified color space makes perfect sense. It elegantly solves the problem of mismatched color spaces, simplifies VFX round-tripping, enables using the same LUTs across all shots, and ensures that NLE controls behave consistently across all of the clips on the timeline. For a multi-camera shoot, however, we prefer Walter Volpatto’s approach:
“Unless you’re doing a documentary or a show that has way too many cameras you have a hero camera: can be an Alexa, can be a Sony, can be a Red, a Blackmagic, a Canon, you name it. You have a hero camera, and usually you have a hero logarithmic color space. So the idea is that that is the color space that I use for all the color corrections. If a shot belonged to a different camera then I do a color space transform to bring that specific shot to the logarithmic.”
So, if you’re dealing with a single-camera project, simply choosing the camera’s own native color space for the timeline and adding an output transform to the display color space as the very last operation is entirely adequate; it avoids unnecessary complexity and image manipulations that could even degrade the image. If anything other than the hero camera’s log is going to be your working color space, it’s essential to take that into account on set, as it could very well change how you end up lighting your scenes. Do note that if you’re using a plugin like Dehancer, or if you want to use any of Cullen Kelly’s LUTs, you have no choice but to select DaVinci Wide Gamut Intermediate as the working color space.
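The ordering recommended above, grading in the camera’s native log space with the display transform as the very last step, can be sketched as simple function composition. Both functions below are hypothetical placeholders for illustration, not real camera or display math:

```python
import numpy as np

def grade(log_rgb):
    # Creative corrections applied to camera-log values (placeholder:
    # a simple gain and lift tweak standing in for a full grade).
    return log_rgb * 1.05 - 0.02

def display_transform(log_rgb):
    # Stand-in for the log-to-display conversion (e.g. a CST or LUT from
    # camera log to Rec.709 / gamma 2.4); NOT real transform math.
    return np.clip(log_rgb, 0.0, 1.0) ** 2.4

def pipeline(camera_log):
    # Grade first, in the camera's own log space; the output transform
    # is the final operation before the display.
    return display_transform(grade(camera_log))
```

The point is purely the ordering: everything creative happens upstream of a single output transform, so swapping deliverables means swapping only that last step.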
How DPs and colorists deal with ACES
Filmmaker Magazine asked Erik Messerschmidt about his own ACES workflow:
“In our case we have the camera output ACES CC, which is essentially ACES log… Then we apply a transform in the monitor to that ACES log and we transform it into Dolby PQ HDR and Rec 2020. We built a series of LUTs that allowed us to monitor it in PQ and Rec.2020.” The colorist can then focus on creatively grading the footage, knowing it arrives with a look that represents what was intended during acquisition.
Rand Thompson wrote about the hazards of not incorporating ACES from the outset, using RED IPP2 as an example:
“You could include a creative RWG/LOG3G10 LUT along with whatever chosen IPP2 output transform you wanted and shoot with that. Then a DIT would probably make CDL corrections according to the DP or director and pass those CDLs along to the colorist to add to the post-production grade for further color grading… Now, however, you would be starting from a look with different tonality and color rendition than the DP, director and DIT had started out with or agreed to in production, and not what the DIT had made CDL corrections for. So now it’s up to the colorist to make it resemble the look that was established in production, which will be very hard if not impossible to achieve.”
“If you developed a LUT that could replicate an ACES workflow, one that would include the IDT (Input Device Transform), the RRT (Reference Rendering Transform) and the ODT (Output Device Transform), and which also included the correct transform from a specific scene-referred color space to the desired output display, that would be the correct way to go.”
“But basically the purpose of the RED IPP2 transform is to transform a REDWideGamutRGB/LOG3G10 image to a desired output display. That requires a LUT or other tool that can take a REDWideGamutRGB/LOG3G10 input and transform that image to a different output color space, in this case REC709/BT1886 gamma 2.4. So, if I use two ACES transforms, one having as its input RWG/LOG3G10 and another which has as its output REC709/BT1886, it should have all of the components needed, along with the required IDT, RRT and ODT, to display an ACES workflow properly with a RWG/LOG3G10 image. You can also use a ‘LUT box’ between the camera and monitor, loaded with a LUT that lets you monitor an ACES workflow.”
“Now, of course, the LUT that would actually be used would be designed by a color scientist working on a film, or by a talented colorist who would use a LUT creation program from maybe Light Illusion or Lattice or some other high-end tool, or it could be a program designed from scratch by the color scientist or colorist. This LUT would go through rigorous testing, be a 65-point cube or even larger, and have a bit depth of 10 or 12 bits.”
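The chain described in these quotes (camera log in, CDL corrections in a unified space, display transform out) can be sketched as a few composed steps. The matrices and transfer curves below are identity placeholders rather than real RED or ACES coefficients; only the ASC CDL step (per-channel slope, offset, power, then a Rec.709-weighted saturation) follows the actual CDL formula.

```python
import numpy as np

# Identity stand-ins for the real RWG -> ACES and ACES -> Rec.709
# matrices, which are deliberately omitted here.
RWG_TO_ACES = np.eye(3)
ACES_TO_REC709 = np.eye(3)

def log3g10_decode(x):
    # Placeholder log-to-linear curve (NOT RED's actual Log3G10 math).
    return np.expm1(x)

def rec709_encode(x):
    # Placeholder display encoding (NOT the actual BT.1886 math).
    return np.clip(x, 0.0, None) ** (1.0 / 2.4)

def apply_cdl(rgb, slope, offset, power, sat=1.0):
    # ASC CDL: out = (in * slope + offset) ** power, per channel,
    # followed by a saturation step using Rec.709 luma weights.
    out = np.clip(rgb * slope + offset, 0.0, None) ** power
    luma = out @ np.array([0.2126, 0.7152, 0.0722])
    return luma[..., None] + sat * (out - luma[..., None])

def camera_to_display(rgb_log, slope=1.0, offset=0.0, power=1.0, sat=1.0):
    aces = log3g10_decode(rgb_log) @ RWG_TO_ACES.T        # IDT stand-in
    graded = apply_cdl(aces, slope, offset, power, sat)   # CDL in unified space
    return rec709_encode(graded @ ACES_TO_REC709.T)       # RRT+ODT stand-in
```

Because the CDL is applied in the unified space rather than downstream of an output transform, the same slope/offset/power numbers mean the same thing on set and in the grading suite, which is exactly the hand-off problem the quotes describe.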
I feel like your ACES / RED IPP2 explanation is really confusing and I’m not sure what problem you’re trying to fix.
As a DIT, I have the cameras output their camera-specific formats: ARRI Wide Gamut 3/LogC3, REDWideGamutRGB/LOG3G10, S-Gamut3.Cine/S-Log3, etc. Then I match these with camera-specific IDTs. All CDL adjustments are made in the ACES space, so they are camera agnostic. Then RRT and ODT to whatever display standard I am working with. The colourist is able to replicate this with 100 percent accuracy. No issues.
Thanks for the feedback, Andre.
“I feel like your ACES / RED IPP2 explanation is really confusing and I’m not sure what problem you’re trying to fix.”
It’s perfectly clear: if a filmmaker doesn’t plan for an ACES workflow from the very start, they could run into trouble later on. Naturally, there are many possible workflows.