Linear space compositing
What is it?
Linear compositing (also known as linear light compositing or scene-referred linear light compositing) is the process of combining images in linear space, which more accurately simulates how a camera (and the human eye) captures light.
Linear compositing requires HDR video material, either from an HDR source or upsampled from an SDR source (like Rec.709).
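For the Rec.709 case, converting SDR code values back to linear light means inverting the Rec.709 OETF. The sketch below shows only that inversion; the function name is illustrative, and a real SDR up-conversion would additionally involve tone mapping and scaling into the HDR working range, which is omitted here.

```python
def rec709_to_linear(v):
    # Invert the Rec.709 OETF: normalised code value (0..1) -> relative scene-linear light.
    # Below the linear toe (code value 0.081 corresponds to linear 0.018),
    # the curve is a simple slope; above it, a 0.45-exponent power law.
    if v < 0.081:
        return v / 4.5
    return ((v + 0.099) / 1.099) ** (1.0 / 0.45)
```

Once pixel values are linear, operations such as mixing, blurring, and keying behave proportionally to real light.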
Traditional video mixing is done with images and videos that are encoded in video gamma space (Video space - legacy). This means the pixel values no longer have a 1:1 relationship with the light values they represent, so mixing them produces unrealistic results. Historically, video gamma evolved as a way to display an accurate tone curve on a CRT by compensating for the hardware's transfer function, and the convention has persisted long after CRT monitors disappeared.
A linear encoding is much truer to the behaviour of light and vision: pixel values are proportional to scene light, and a dynamic range of the image is then selected for display on an output device. This mirrors the photographic practice of “stopping up” or “stopping down” to reveal different aspects of the captured image.
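The difference is easy to demonstrate with a 50/50 cross-dissolve between a bright and a dark pixel. This is a minimal sketch: the simple 2.2 power law stands in for the real video transfer curve, which is an assumption for illustration only.

```python
GAMMA = 2.2  # assumption: simple power-law stand-in for the video transfer curve

def decode(v):
    # Gamma-encoded code value -> linear light.
    return v ** GAMMA

def encode(v):
    # Linear light -> gamma-encoded code value.
    return v ** (1.0 / GAMMA)

bright, dark = 1.0, 0.0

# Video-space mix: average the encoded code values directly.
video_mix = 0.5 * bright + 0.5 * dark

# Linear mix: decode to light, average, then re-encode for display.
linear_mix = encode(0.5 * decode(bright) + 0.5 * decode(dark))
```

The linear result comes out noticeably brighter than the video-space result, because averaging actual light quantities matches how a real double exposure or defocused lens combines them; averaging gamma-encoded values systematically darkens the mix.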
Comparison of Pixotope's compositing modes
Out of focus
The bright sky on the left accurately bleeds over the darker areas (Left - Linear space | Right - Video space). This is a naturally occurring effect where lighter areas “eat” into the dark regions as the image is defocussed. By contrast, the video-space defocus shows the darker regions growing and blurring over the highlights.
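The highlight bleed can be reproduced with a simple 1D box blur across a hard bright/dark edge. This is an illustrative sketch, not Pixotope's defocus implementation: the 2.2 power law stands in for the real transfer curve, and a box blur stands in for a real lens kernel.

```python
GAMMA = 2.2  # assumption: simple power-law stand-in for the video transfer curve

def box_blur(row, radius=1):
    # 1D box blur with edge clamping.
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

# A hard edge: bright sky (encoded 1.0) next to dark foreground (encoded 0.05).
row = [1.0] * 4 + [0.05] * 4

# Video-space blur: operate directly on the encoded code values.
video = box_blur(row)

# Linear blur: decode to light, blur, then re-encode for display.
linear = [v ** (1.0 / GAMMA) for v in box_blur([v ** GAMMA for v in row])]
```

Just inside the dark side of the edge, the linear result stays much brighter than the video-space result: the highlight spreads into the shadow, as with a real defocused lens, while the video-space blur lets the dark region encroach on the highlight.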
Exposing the background
The bright sky on the left accurately bleeds over the fine hair (Left - Linear space | Right - Video space).
The bright lights on the left create accurately exposed light streaks (Left - Linear space | Right - Video space).