Two cameras don't really make for a light field camera, in which a computational model of the captured light rays is built, allowing them to be projected onto a virtual image capture plane through a synthetic aperture. That's what Lytro is doing with their plenoptic Cinema Camera (see previous post), and, more analogously, what Light is promising with the 16-lens L16 camera (two posts on that one so far).
Computational Photography Is Here (and Has Been for a While, Actually)
I'm pretty sure that with only two cameras, you can't build a useful light field. But can you do computational photography? That's a trick question, because the iPhone, like many other mobile phone cameras, is already doing computational photography. The iPhone will, for example, automatically perform an HDR merge of two exposures. But even when the iPhone snaps a single, non-HDR exposure, the amount of post-processing it does is considerable.
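To make the idea concrete, here's a toy illustration of merging two exposures, loosely in the spirit of the HDR merge mentioned above. Apple's actual pipeline is unpublished; this sketch simply weights each pixel by how well-exposed it is (how close it sits to mid-gray), a common trick in exposure fusion. The function name and parameters are my own invention.

```python
import numpy as np

def merge_exposures(under, over, sigma=0.2):
    """Blend two exposures of the same scene, favoring well-exposed pixels.

    under, over: float arrays of shape (H, W, 3) with values in [0, 1],
                 the darker and brighter exposures of the same scene.
    sigma:       width of the well-exposedness weighting curve.
    """
    def well_exposedness(img):
        # Gaussian weight centered on mid-gray, averaged over the RGB channels
        w = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))
        return w.mean(axis=-1, keepdims=True)

    w_u = well_exposedness(under)
    w_o = well_exposedness(over)
    total = w_u + w_o + 1e-8  # avoid division by zero
    # Per-pixel weighted average: clipped shadows in `under` defer to `over`,
    # and blown highlights in `over` defer to `under`
    return (w_u * under + w_o * over) / total
```

A real merge would also align the frames and work in a linear color space, but the weighting idea is the heart of it.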
We've gotten to test this firsthand recently, with Apple opening up raw capture to developers. Adobe jumped on this right away with Lightroom Mobile, having already implemented raw support in their Android version. The first thing you notice when shooting raw DNG files with your iPhone is how noisy the images are. Turns out Apple's been doing a ton of noise reduction on their photos for a few generations now. It's entirely possible that they're using multiple exposures to aid in this process, but I don't know if anyone's ever confirmed that.
Portrait Mode, Depth Effect
Apple calls their initial two-lens computational photo offering Portrait Mode, and the most recent developer beta of iOS 10.1 includes a beta version of it. Under the right circumstances, this mode enables a so-called "Depth Effect," where both cameras fire simultaneously, and a depth map is built based on the subtle stereo disparity between the captured images. This nine-level depth map is used to drive a variable-radius blur. The result is a photo with simulated shallow depth of field.
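A quantized depth map driving a variable-radius blur can be sketched in a few lines. This is my own illustrative reconstruction, not Apple's implementation: it blurs the image once per depth level, with blur radius growing with depth, and composites the levels according to the map.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_effect(image, depth_map, max_sigma=8.0, levels=9):
    """Blend per-level Gaussian blurs according to a quantized depth map.

    image:     float array of shape (H, W, 3), values in [0, 1]
    depth_map: int array of shape (H, W), values in [0, levels - 1],
               where 0 is the in-focus subject plane
    max_sigma: blur strength at the farthest depth level
    """
    out = np.zeros_like(image)
    for level in range(levels):
        # Blur radius ramps from 0 (subject plane) up to max_sigma
        sigma = max_sigma * level / (levels - 1)
        if sigma == 0:
            blurred = image
        else:
            # Blur each color channel independently
            blurred = np.stack(
                [gaussian_filter(image[..., c], sigma) for c in range(3)],
                axis=-1,
            )
        # Copy this level's blur into the pixels at this depth
        mask = (depth_map == level)[..., None]
        out = np.where(mask, blurred, out)
    return out
```

A production implementation would blend across level boundaries and shape the blur like a real lens bokeh rather than a Gaussian, but this captures the basic mechanism: depth in, per-pixel blur radius out.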
This process can never be perfect, but can it be good enough?
Oh hell yes it can.
Why Do We Care?
When I first started testing Portrait Mode, I was alone in my backyard, with only inanimate props. I took some shots where the Depth Effect shined, and some where it flopped. I posted some samples on Instagram, using an unforgiving split-screen effect that dramatically highlights the imperfections of the processing.
Most notably, the processing gives the foreground a bit of a haircut, which you can see clearly in this example.
This stands to reason. The depth map is very likely computed at a reduced resolution, and I bet it’s noisy. Any smoothing is going to also eliminate certain edge details, and Apple's engineers have, I'm surmising, estimated that eating into the edges a bit overall is better than seeing a halo of crisp background between the foreground subject and the blurred background.
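The tradeoff can be shown in miniature. Shrinking (eroding) the foreground mask by a couple of pixels guarantees that no sliver of sharp background survives around the subject, at the cost of blurring away a little true foreground edge, i.e. the "haircut." The function and the erosion amount here are illustrative, not Apple's.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def conservative_foreground(mask, pixels=2):
    """Erode a boolean foreground mask by `pixels` iterations.

    A smaller mask means some real foreground gets blurred (the haircut),
    but no halo of crisp background is left hugging the subject.
    """
    return binary_erosion(mask, iterations=pixels)
```

Given a noisy, low-resolution depth map, erring on the side of a slightly-too-small foreground is the visually safer mistake.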
The next night, my family came over for a cookout. As we ate and drank into the evening, reveling in global warming, I remembered that I had a new toy to play with. I pulled out my phone, toggled over to Portrait Mode, and snapped a few shots of my brother-in-law and his adorable son.