The Benefits Of Shooting 8K

Alister Chapman writes about the advantages of 8K and muses whether The Creator shouldn’t have been shot on the Sony A1 rather than the FX3:

“Since the launch of Burano I’ve become more and more convinced of the benefits of an 8K sensor – even if you only ever intend to deliver in 4K, the extra chroma resolution from actually having 4K of R and B pixels makes a very real difference. Venice 2 really made me much more aware of this and Burano confirms it. Because of this I’ve been shooting a lot more with the Sony A1 (which possibly shares the same sensor as Burano). There is something I really like about the textural quality in the images this camera makes. In addition, when using a very compressed codec such as the XAVC-HS in the A1 recording at 8K, any artefacts tend to be smaller and less visible at 4K. This allows you to grade the material harder than perhaps you can with similar 4K footage. The net result is the 10-bit 8K looks fantastic in a 4K production.”

“I have to wonder if The Creator wouldn’t have been better off being shot with an A1 rather than an FX3. You can’t get 8K raw out of an A1, but the extra resolution makes up for this and it may have been a better fit for the 2x anamorphic lens that they used.”

One argument for choosing RAW over Log for HDR productions has been to avoid the Y’CbCr subsampling present in most Log profiles. Art Adams explained why Y’CbCr encoding is not suited to HDR: 

“The Y’CbCr encoding model is popular because it conceals subsampling artifacts vastly better than does RGB encoding. Sadly, while Y’CbCr works well in Rec 709, it doesn’t work very well for HDR. Because the Y’CbCr values are created from RGB values that have been gamma corrected, the luma and chroma values are not perfectly separate: subsampling causes minor shifts in both. This isn’t noticeable in Rec 709’s smaller color gamut, but it matters quite a lot in a large color gamut. Every process for scaling a wide color gamut image to fit into a smaller color gamut utilizes desaturation, and it’s not possible to desaturate Y’CbCr footage to that extent without seeing unwanted hue shifts. My recommendation: always use RGB 4:4:4 codecs or capture raw when shooting for HDR, and avoid Y’CbCr 4:2:2 codecs. If a codec doesn’t specify that it is “4:4:4” then it uses Y’CbCr encoding, and should be avoided.”
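Adams’ point that luma and chroma are not perfectly separate can be illustrated with a small sketch. This is a simplified, hypothetical example using the BT.709 non-constant-luminance coefficients (not Adams’ own figures): two adjacent pixels are forced to share one averaged chroma pair, as subsampling does, and the reconstructed color of both pixels shifts even though their Y’ values were never touched.

```python
# Non-constant-luminance Y'CbCr with BT.709 coefficients. Because Y'
# is computed from gamma-corrected R'G'B', averaging only the chroma
# of neighbouring pixels (as 4:2:2/4:2:0 subsampling does) still
# shifts the reconstructed colour of each pixel.

def rgb_to_ycbcr(r, g, b):
    # Luma from gamma-corrected R'G'B' -- not true linear-light luminance.
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return y, (b - y) / 1.8556, (r - y) / 1.5748

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.5748 * cr
    b = y + 1.8556 * cb
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
    return r, g, b

red  = rgb_to_ycbcr(0.9, 0.1, 0.1)   # saturated red pixel
grey = rgb_to_ycbcr(0.5, 0.5, 0.5)   # neutral neighbour

# Subsample: both pixels now share the averaged chroma pair.
cb = (red[1] + grey[1]) / 2
cr = (red[2] + grey[2]) / 2

r, g, b = ycbcr_to_rgb(red[0], cb, cr)
# The red pixel's R' drops from 0.9 to roughly 0.585: its hue and
# saturation have shifted, despite Y' being left untouched.
```

In Rec 709’s small gamut the pixels involved are rarely this saturated, so the shift stays invisible; in a wide gamut the same arithmetic produces much larger errors, which is exactly Adams’ objection.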

However, if you record 8K Y’CbCr encoded footage and downconvert to 4K, color resolution increases from 4:2:0/4:2:2 to 4:4:4. Panasonic explains how downconverting increases color resolution, using the EVA1 as an example:

“One excellent benefit of downconverting UHD/4K footage to 1080 HD in post is that you can realize an increase in proportional color resolution. The EVA1 is capable of recording 4K or UHD footage using 4:2:2 color sampling, but when employing frame rates faster than 30 frames per second the recording is reduced to 8 bits per pixel and utilizes 4:2:0 color sampling. If you’re going to be delivering in HD, you’ll be downconverting your UHD/4K footage. After downconversion, the 4:2:0 UHD/4K footage will become HD footage with 4:4:4 color sampling. You can convert 3840×2160 8-bit 4:2:0 recorded footage into 1920×1080 4:4:4 footage in post.”

“To understand the color sampling advantage, you’d have to first understand that the camera records some of its footage in 4:2:2 and, depending on the resolution and frame rate and codec bitrate, it may record some of its footage in 4:2:0 color sampling. 4:2:0 means (simply put) that there is one color sample for every 2×2 block of pixels. In any given 2×2 block of pixels there are four different “brightness” samples, but they all share one “color” sample. Effectively, within the 3840 x 2160 frame, there is a 1920 x 1080 matrix of color samples, one for every 2×2 block of pixels. During the downconversion to HD, each block of 2×2 brightness samples is converted into one HD pixel, creating a 1920 x 1080 matrix of brightness pixels. This 1920 x 1080 “brightness” (luminance) matrix can be effectively married to the originally-recorded 1920 x 1080 “color” matrix, resulting in one individual and unique color sample for each and every brightness pixel. The result is effectively 4:4:4 color sampling at high-definition resolution.”
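The mechanics Panasonic describes can be sketched in a few lines. This is an illustrative toy (random planes, simple 2×2 box averaging for the luma downscale, no real scaler): the chroma planes of a UHD 4:2:0 frame are already 1920×1080, so once the luma plane is averaged down to match, every HD pixel has its own chroma sample.

```python
import numpy as np

# A hypothetical UHD 4:2:0 frame: full-resolution luma, one chroma
# sample per 2x2 block of pixels.
H, W = 2160, 3840
Y  = np.random.rand(H, W)            # 3840 x 2160 brightness samples
Cb = np.random.rand(H // 2, W // 2)  # 1920 x 1080 colour samples
Cr = np.random.rand(H // 2, W // 2)

# Downconvert luma: average each 2x2 block into one HD pixel
# (a simple box filter standing in for a real scaler).
Y_hd = Y.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

# The chroma planes were recorded at HD resolution to begin with, so
# no chroma interpolation is needed: every HD pixel now carries its
# own unique colour sample -- effectively 4:4:4.
assert Y_hd.shape == Cb.shape == Cr.shape == (1080, 1920)
```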

Just the same, there are also good arguments for preferring 12 bits over 10 bits. Matthew Bilodeau elaborates:

“… if you’re considering which codecs to use as intermediates for HDR work, especially if you’re planning on an SDR down-grade from these intermediates, 12 bits per channel as a minimum is important. I don’t want to get sidetracked into the math behind it, but just a straight cross conversion from PQ HDR into SDR loses about ½ bit of precision in data scaling, and another ¼ – ½ bit of precision in redistributing the values to the gamma 2.4 curve, leaving a little more than 1 bit of precision available for readjusting the contrast curve (these are not uniform values). So, to end up with an error-free 10 bit master (say, for UHD broadcast) you need to encode 12 bits of precision into your HDR intermediate.”
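A back-of-the-envelope sketch makes the shortfall concrete. These are not Bilodeau’s exact figures: the 0.51 constant below is just an approximation of where 100-nit SDR reference white lands on the PQ curve, so an SDR grade only occupies roughly half of a PQ intermediate’s code range.

```python
# Toy precision budget, not the full PQ math. In PQ encoding, 100-nit
# SDR white sits at roughly code level 0.51 of full scale (assumed
# approximation), so only about half of the intermediate's code
# values are available to the SDR portion of the signal.

SDR_FRACTION = 0.51

def usable_codes(intermediate_bits):
    # Code values available to represent the SDR range.
    return int((1 << intermediate_bits) * SDR_FRACTION)

usable_codes(10)  # ~522 codes: cannot fill a 1024-code 10-bit master
usable_codes(12)  # ~2088 codes: comfortably covers all 1024 master codes
```

With a 10-bit intermediate, 1024 master code values must be sourced from roughly 522 distinct inputs, so neighbouring output codes collapse together and banding appears; the 12-bit intermediate leaves headroom even after the further losses Bilodeau describes.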

Grass Valley, a manufacturer of television production and broadcasting equipment, concurs:

“Converting one 10-bit format into another 10-bit format will reduce the performance of the signal and should be avoided whenever possible.” – HDR: A Guide to High Dynamic Range Operation for Live Broadcast Applications, Grass Valley

Not only that: a 10-bit pipeline also requires a considerably larger 3D LUT than a 12-bit one to achieve the same image quality, i.e. to keep interpolation errors equally low.
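The cost of that larger LUT grows quickly. A minimal sketch of the arithmetic (common grid sizes assumed for illustration): a 3D LUT stores one RGB triplet per node, values between nodes are interpolated, and refining the grid to cut interpolation error multiplies the entry count cubically.

```python
# Node counts for common 3D-LUT grid sizes. Finer grids reduce the
# interpolation error between nodes, at a cubic cost in stored entries.

def lut_nodes(per_axis):
    return per_axis ** 3  # one RGB triplet stored per node

sizes = {n: lut_nodes(n) for n in (17, 33, 65)}
# 17 -> 4,913 nodes; 33 -> 35,937; 65 -> 274,625
```

Stepping up just one common grid size, from 33 to 65 nodes per axis, costs nearly eight times the memory and processing.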
