High Dynamic Range (HDR) has been enjoying a bit of a space race in the digital camera market. What once required careful bracketing and complex stacking software has been largely reduced to an in-camera option, thanks to faster processors and burst-capture capabilities. And while on-the-fly HDR results still have their work cut out for them when it comes to matching analog film’s dynamic range, they nevertheless represent a massive improvement over what was possible just a few short years ago.
But dynamic range, color temperature, and noise (within reason) aren’t the only things that can benefit from shoot-now-decide-later flexibility. I predict that the next variable to benefit from post-production flexibility will be focus, specifically depth of field.
There is a lot of progress being made in light field capture. Witness the Lytro: a consumer-friendly device that captures a scene’s entire depth of field in a single shot and lets users decide later where to place the focal point. It’s pretty impressive technology (dare I say, the stuff of science fiction), but it still relies on a proprietary post-production step that isn’t (yet) as ubiquitous or standardized as RAW processing. I’m sure they’re working on that too, but in the meantime, I think the advances we’ve seen in smartphone cameras may render the Lytro’s light field method quaint and unnecessarily complex in short order.
Focus bracketing is already a popular method of ensuring that you get the shot, provided your camera can rack focus and bracket quickly enough. This method has also proved valuable for combining several partially focused images to create one with a greater depth of field. With smartphone cameras already offering best-face and strobe-effect shots through simple bursting subroutines, I predict that we will have similar best-focus flexibility very soon. As we gain the ability to capture bigger frames at higher frame rates, it’s only a matter of time before we see a camera that inhales 20 or more shots in the single second (or less) it takes to rack through its entire focal range. The entire burst set can then be saved in a stack, providing the option either to select a single focal point or to combine several of them long after the moment has passed.
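The "combine several of them" step above is the essence of focus stacking, and a toy version of it is surprisingly simple. This is a minimal sketch of the general idea, not any camera maker's actual algorithm: for each pixel, keep the value from whichever frame in the burst is locally sharpest, using the absolute value of a 4-neighbour Laplacian as a stand-in sharpness measure. (Real implementations also align the frames and smooth the selection map; the tiny example images here are made up purely for illustration.)

```python
def laplacian(img, y, x):
    """Absolute 4-neighbour Laplacian at (y, x): a rough proxy for local sharpness."""
    h, w = len(img), len(img[0])
    centre = img[y][x]
    total = 0.0
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            total += img[ny][nx] - centre
    return abs(total)

def focus_stack(frames):
    """Merge a burst of same-sized frames, taking each pixel from the sharpest frame."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Pick the frame with the strongest local detail at this pixel.
            best = max(frames, key=lambda f: laplacian(f, y, x))
            out[y][x] = best[y][x]
    return out

# Two 1x4 "frames": the first has high contrast (is in focus) on the left,
# the second on the right. The merged result keeps both sharp regions.
near = [[0.0, 1.0, 0.5, 0.5]]
far  = [[0.5, 0.5, 1.0, 0.0]]
merged = focus_stack([near, far])  # -> [[0.0, 1.0, 1.0, 0.0]]
```

Picking a single winner per pixel is what produces the "halo" artifacts around depth boundaries; production stacking software blends the frames with per-pixel weights instead of a hard selection.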
At the rate that burst speeds, focus mechanics, and processor horsepower have been improving, I believe we will have this capability within months, not years. And I can’t wait to get my hands on it.
Well, it seems the pixels weren’t even dry on this post before my dreams came true, LOL! It turns out Samsung currently offers this feature on the Galaxy Note 4 (and, presumably, every smartphone hereafter). There seem to be a few minor glitches left to iron out, but the overall result is pretty impressive. For instance, when the image contains an overt reference to a receding depth of field, like the countertop in the following samples, the algorithm doesn’t quite know how to handle it. However, if there is a clear separation between foreground and background, the application is better at isolating the foreground image without any focus “haloing”.
Have a gander. Each set of three images was derived from a single press of the shutter “button”: