RED came out of the gate strong with a message of the importance of spatial resolution. We were told that the RED One was an important camera because it “shot 4K,” and 4K is better. A more-is-more argument that I agree with only in part.
In the stills world, the obsession with resolution became the “megapixel race,” and only in the last couple of years has some sanity been brought to that conversation. Canon’s 14.7 megapixel PowerShot G10 was assailed as a victim of superfluous resolution at the expense of the kind of performance that really matters, and Canon backpedaled, succeeding it with the G11 at 10 megapixels.
Why is more not always more? First, there’s the simple matter that there is such a thing as “enough” resolution, although folks are happy to debate just how much that is. But there’s also an issue of physics. Only so much light hits a sensor. If you dice up the surface into smaller receptors, each one gets less light. Higher-resolution sensors have to work harder to make an image from that diminished signal. This is why mega megapixels is a particularly disastrous conceit in tiny cameras, and why the original Canon 5D, with its full-frame sensor at a modest 12.8 megapixels, made such sumptuous images.
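If you’d rather see that physics as arithmetic, here’s a back-of-the-envelope Python sketch. The sensor dimensions are approximate published specs I’m plugging in for illustration; the point is the per-photosite math, not the exact decimals.

```python
# Rough "less light per pixel" math. Sensor dimensions are approximate
# published figures, supplied here for illustration only.

def light_per_photosite(width_mm, height_mm, megapixels):
    """Sensor area divided by pixel count: square microns per photosite."""
    area_um2 = (width_mm * 1000.0) * (height_mm * 1000.0)
    return area_um2 / (megapixels * 1e6)

g10 = light_per_photosite(7.6, 5.7, 14.7)       # PowerShot G10, ~1/1.7" sensor
five_d = light_per_photosite(35.8, 23.9, 12.8)  # original 5D, full frame

print(round(g10, 1))           # ~2.9 square microns per photosite
print(round(five_d, 1))        # ~66.8 -- huge, light-hungry photosites
print(round(five_d / g10, 1))  # ~22.7x more light per photosite on the 5D
```

Roughly 23 times the light landing on each photosite. That’s the physics behind those sumptuous 5D images.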
All things being equal, resolution comes at the expense of light sensitivity. Light sensitivity is crucial for achieving the thing most lacking in digital imaging: latitude.
What we lament most about shooting on digital formats is how quickly and harshly they blow out. Film, glorious film, will keep trying to accumulate more and more negative density the more photons you pound into it. This creates a gradual, soft rolloff into highlights that film people call the shoulder. It’s the top of that famous s-curve. You know, the one that film has, and digital don’t.
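If the shoulder is hard to picture, here’s a toy sketch in Python. The Reinhard-style curve is just my stand-in for a film shoulder, not a model of any actual stock; the point is that a hard digital clip pins to white while a shoulder keeps telling overexposed values apart.

```python
# Toy comparison: hard digital clip vs. a film-style shoulder.
# The shoulder curve here is a Reinhard-style stand-in, not real film.

def hard_clip(exposure):
    """Digital-style response: linear until it slams into white."""
    return min(exposure, 1.0)

def shoulder(exposure):
    """Gentle rolloff: approaches white but never quite gets there."""
    return exposure / (exposure + 1.0)

for stops_over in range(5):
    e = 2.0 ** stops_over  # 1x, 2x, 4x, 8x, 16x exposure
    print(stops_over, hard_clip(e), round(shoulder(e), 3))
# The clipped response is stuck at 1.0 from the first overexposed stop on;
# the shoulder keeps climbing: 0.5, 0.667, 0.8, 0.889, 0.941.
```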
You know how sometimes you drag a story out when you know you have a good punchline?
When RED started talking about successors to their first camera, it was all about resolution. Who ever said 4K was good enough? We need 5K and beyond! Of course the Epic would have more resolution. But would it have more latitude?
As the stills world’s megapixel race became the high-ISO race (now that’s something worth fighting for!), so too did the digital cinema world get a dose of sanity in the form of cameras celebrating increased latitude. Arri’s Alexa championed its highlight handling. And RED started swapping its new MX sensor into RED One bodies, touting its improved low-light performance and commensurate highlight handling.
Life was good.
And then Jim Jannard started hinting at some kind of HDR mode for the Epic. HDR, as in High Dynamic Range, as in more latitude.
The first footage they posted seemed to hint at a segmented exposure technique. It looked like the Epic was using two frames to build each final frame, and Jim later corroborated this. The hero exposure, or A Track, would be exposed as normal (let’s just say 1/48 second for 24p at 180º shutter). The X Track would be exposed immediately afterward (see update below) at a shorter shutter interval. Just how much shorter would determine how many stops of additional latitude you’d gain. So if you want four additional stops, the X Track interval would be four stops shorter than 1/48, or 1/768 (11.25º).
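Here’s that arithmetic as a quick Python sketch, if you want to play with different stop counts. This is just the exposure math as described above, nothing resembling RED’s actual implementation:

```python
# HDRx exposure math: the X Track interval is the hero (A Track)
# interval shortened by the chosen number of additional stops.

FPS = 24.0

def shutter_interval(angle_deg, fps=FPS):
    """Shutter interval in seconds for a given shutter angle."""
    return (angle_deg / 360.0) / fps

def x_track_interval(a_interval, extra_stops):
    """X Track interval: the A Track interval cut by 2^stops."""
    return a_interval / (2 ** extra_stops)

a = shutter_interval(180)   # hero exposure: 1/48 s at 24p
x = x_track_interval(a, 4)  # four additional stops of latitude

print(1 / a, 1 / x)         # 48.0 768.0 -- 1/48 s and 1/768 s
print(360 * x * FPS)        # 11.25 -- the X Track's shutter angle in degrees
```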
The A Track and the X Track are recorded as individual, complete media files (.R3D), so you burn through media twice as fast, and cut your overcrank ability in half. Reasonable enough.
But could this actually work? You’d be merging two different shutter intervals. Two different moments in time (again, see the update below). Would there be motion artifacting? Would your eye accept highlights with weird motion blur, or vice versa? Would the cumulative shutter interval (say, 180º plus 11.25º) add up to the dreaded “long shutter look” that strips digital cinema of all cinematicality?
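To put a number on that last worry: assuming the X exposure butts right up against the A exposure (which, per the update below, it does), the combined light-gathering window at 24p works out to barely more than a standard 180º:

```python
# Combined light-gathering window, assuming back-to-back A and X exposures.

FPS = 24.0
a = (180.0 / 360.0) / FPS  # hero exposure: 1/48 s
x = a / (2 ** 4)           # four stops shorter: 1/768 s

combined = a + x             # ~0.0221 s total window
print(360 * combined * FPS)  # 191.25 -- degrees, a hair past 180
```

Whether your eye reads that as a long shutter is exactly the kind of thing you have to shoot to find out.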
RED’s examples looked amazing. But when the guys at fxguide and fxphd got their hands on an Epic, they decided to put it to the real test. The messy test. The spinning helicopter blades, bumpy roads, hanging upside down by wires test. In New Zealand. For some reason.
Thankfully, they invited me along to help.
But before I’d even landed in Middle Earth, Mike Seymour had teamed up with Jason Wingrove and Tom Gleeson to shoot a little test of HDRx. They called it, just for laughs, The Impossible Shot.
This is not what HDRx was designed to do. It was designed to make highlights nicer. To take one last “curse” off digital cinema acquisition. This is not that. This is “stunt HDRx.”
And it works. Perfectly.
Sure, dig in, get picky. Notice the sharper shutter on the latter half of the shot. Notice the dip in contrast during the transition. The lit signs flickering.
Then notice that there’s not another camera on the planet today that could make this shot.
I guess Mike should really have called it “The Formerly Impossible Shot.”
Read more at fxguide, and stay tuned to fxphd for details on their new courses, coming April 1.
Update
on 2011-04-19 21:35 by Stu
Graeme Nattress confirmed for me that the X Track is not sampled out of the A Track interval, but is in fact a separate, additional exposure. There is no gap between the X and A exposures, but they don’t overlap.
The just-posted first draft of the RED Epic Operation Guide has a few nice details about HDRx as well.