Media Nerdvana Achieved (For Now…)

One format to rule them all,
One archive to protect them,
One player to stream them all,
and on my TV project them.

It seems like a fairly simple proposition, doesn’t it?  A common file type to contain video and audio assets without proprietary licensing restrictions or DRM “protections”; a scalable, fault-tolerant storage solution with full redundancy to hold them all; and a means to stream them in all their digital glory to a home theater screen.  It would also be nice if it didn’t break the bank :P  Well, I’ve been poking around at this piecemeal for a couple of years now, and I think I’ve finally found a solution that satisfies 99% of the ancient Elvish proverb.

For the impatient, here’s the skinny, in the above order:

  • MP4 container – 1080p h.264 video with AAC Stereo (Dolby Pro Logic II) and AC3 (Dolby Digital 5.1 surround) passthrough audio
  • Synology Hybrid RAID Network Attached Storage (NAS)
  • Kodi (XBMC)
  • Google Nexus Player

Let’s break it down, shall we?

One Format to Rule them All – MP4

In a perfect world, I’d happily stick with Matroska (MKV) containers.  You can put everything including the kitchen sink into an MKV file, and it will play.  Dolby Digital 5.1, DTS, True HD 7.1, AAC, MP3, MJPEG, h.264, subtitles… you name it.  If you can encode it, MKV can house it, and the best part: no corporation owns it; it’s open source and utterly free.  So why would I settle for MP4?  In a word, Apple.  The folks in Cupertino, being wedded to so many content providers, aren’t too cozy with open-source formats that let users circumvent their iTunes (near) monopoly.  In fairness to the Mac fanboys (among whom I do still occasionally count myself), Google, Amazon, and Samsung play the same game with their content, but I digress.  For now, I still like the richness of embedded metadata possible in the MP4 container, and the way iTunes exploits it.  So as long as I’m managing my media with iTunes (one notable exception, below), it pays to make it all iTunes -and iOS- friendly.  The good news is that my home theater solution works with MP4s as easily as MKVs.  So, until Apple throws one of their elitist tantrums and does something stupid, like drop support for Dolby Digital (AC3), I’ll stick with it.

Truthfully, I’m about at my wit’s end with Apple and their inexplicable lack of leadership in the latest media standards (like 4K video, True HD audio, and high resolution mobile device displays).  As of this writing, I don’t own any iOS devices anymore, and I’m running out of reasons to keep dumbing down my media experience to accommodate them.  Probably the only thing keeping me from kicking MP4 to the curb is the Herculean transcoding/re-muxing chore it would entail.  But don’t think I won’t do it eventually, Messrs. Ive and Cook.  You’re on thin ice.
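For what it’s worth, moving between MKV and MP4 is mostly a re-mux, not a full re-encode.  Here’s a rough sketch of the kind of ffmpeg invocation I have in mind, built as a Python argument list.  The filenames, the 256k bitrate, and the assumption that the source’s first audio stream is the AC3 track are all mine -treat it as a starting point, not a tested recipe:

```python
def remux_cmd(src, dst):
    """Build an ffmpeg command line that copies the h.264 video untouched
    and writes two audio tracks from the source's first audio stream:
    a stereo AAC downmix (track 0) and an AC3 passthrough copy (track 1)."""
    return [
        "ffmpeg", "-i", src,
        "-map", "0:v:0", "-c:v", "copy",      # video: straight copy, no re-encode
        "-map", "0:a:0", "-c:a:0", "aac",     # audio track 0: stereo AAC...
        "-b:a:0", "256k", "-ac:a:0", "2",     # ...at 256 kbps, downmixed to 2 channels
        "-map", "0:a:0", "-c:a:1", "copy",    # audio track 1: AC3 passthrough
        dst,
    ]

# To actually run it (requires ffmpeg on the PATH):
# import subprocess
# subprocess.run(remux_cmd("movie.mkv", "movie.mp4"), check=True)
```

Since the video stream is copied bit-for-bit, the only slow part is the AAC downmix, so a whole library can be batch-processed this way overnight.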

The Next Big Thing in Photography

High Dynamic Range (HDR) has been enjoying a bit of a space race in the digital camera market.  What once required careful bracketing and complex stacking software has been largely reduced to an in-camera option, thanks to faster processors and burst-capture capabilities.  And while on-the-fly HDR results still have their work cut out for them (they don’t yet match analog film’s dynamic range), they nevertheless represent a massive improvement over what was possible just a few short years ago.
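For a feel of what that in-camera blending step amounts to, here’s a toy, Mertens-style exposure fusion in NumPy -a deliberately simplified stand-in, not any camera maker’s actual pipeline.  The function name, the Gaussian weighting, and the sigma value are all my own choices:

```python
import numpy as np

def exposure_fuse(frames, sigma=0.2):
    """Blend a bracketed burst (pixel values in [0, 1]) by favoring,
    at each pixel, whichever frame sits closest to mid-gray --
    i.e. whichever frame is best exposed at that spot."""
    stack = np.stack(frames)                                # (N, H, W)
    # Gaussian weight centered on mid-gray: well-exposed pixels score high
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)           # normalize per pixel
    return (weights * stack).sum(axis=0)
```

Real implementations work per-channel, add contrast and saturation terms, and blend across a multi-scale pyramid to avoid seams, but the core idea -a per-pixel weighted average that trusts the well-exposed frame- is the same.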

But dynamic range, color temperature, and noise (within reason) aren’t the only things that can benefit from shoot-now-decide-later flexibility.  I predict that the next variable to benefit from post-production flexibility will be focus -specifically, depth of field.

There is a lot of progress being made in light field capture.  Witness the Lytro: a consumer-friendly device that captures a scene’s entire depth of field in a single shot and lets users decide later where to place the focal point.  It’s pretty impressive technology -dare I say the stuff of science fiction- but it still relies on a proprietary post-production step that isn’t (yet) as ubiquitous or standardized as RAW processing.  I’m sure they’re working on that too, but in the meantime, I think the advances we’ve seen in smartphone cameras may render the Lytro’s light field method quaint and unnecessarily complex in short order.

Focus bracketing is already a popular method of ensuring that you get the shot, provided your camera can rack focus and bracket quickly enough.  This method has also proved valuable for combining several partially-focused images to create one with a greater depth of field.  With smartphone cameras already offering best-face and strobe-effect shots through simple bursting subroutines, I predict that we will have similar best-focus flexibility very soon.  As we gain the ability to capture bigger frames at higher frame rates, it’s only a matter of time before we see a camera that inhales 20 or more shots in the single second (or less) it takes to rack through its entire focal range.  The entire burst set can then be saved in a stack, providing the option either to select a single focal point or to combine several of them long after the moment has passed.
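The “combine several of them” step is conceptually simple.  Here’s a minimal NumPy sketch -not any vendor’s algorithm- that merges a focus-bracketed burst by keeping, at each pixel, the frame with the strongest local detail (Laplacian response).  The function names and the 4-neighbor focus measure are my own simplifications:

```python
import numpy as np

def laplacian_sharpness(img):
    """Per-pixel focus measure: absolute 4-neighbor Laplacian response.
    (np.roll wraps at the edges; acceptable for a sketch.)"""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return np.abs(lap)

def focus_stack(frames):
    """Merge a burst of differently-focused grayscale frames by taking,
    at each pixel, the value from the frame that is sharpest there."""
    stack = np.stack(frames)                                   # (N, H, W)
    sharpness = np.stack([laplacian_sharpness(f) for f in frames])
    best = np.argmax(sharpness, axis=0)                        # sharpest frame index per pixel
    merged = np.take_along_axis(stack, best[None], axis=0)[0]  # pick winners
    return merged, best
```

The same per-pixel sharpness map also gives you the single-focal-point option for free: instead of merging, just display the whole frame whose index wins in the region the user taps.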

At the speed burst rates, focus mechanics, and processor horsepower have been improving, I believe we will have this capability within months, not years.  And I can’t wait to get my hands on it.


12/12/2014 UPDATE:

Well, it seems the pixels weren’t even dry on this post before my dreams came true, LOL!  Samsung now offers this feature on the Galaxy Note 4 (and, presumably, every smartphone hereafter).  There are a few minor glitches left to iron out, but the overall result is pretty impressive.  For instance, when the image contains an overt reference to a receding depth of field, like the countertop in the following samples, the algorithm doesn’t quite know how to handle it.  However, if there is a clear separation between foreground and background, the application is better at isolating the foreground image without any focus “haloing”.

Have a gander.  Each set of three images was derived from a single press of the shutter “button”:

[Image gallery – captions: “Near”, “Far”, “And everything in between!”, “Nearer my Dog to Thee”, “LOOK OUT!  It’s a Gila Monster!”, “Full DOF… there’s that focus halo I was talking about.”]

Red Dot Finder


So, in case you don’t already know, the Orion EON 110mm ED refractor does not come with any extras.  Unless you count the front element cover, ring clamps, and the case (and I don’t), you will need to budget appropriately for your own dovetail rails, finder scope, diagonal, and eyepieces.

For my money, nothing beats the simplicity and minuscule profile of a red dot finder, but getting one attached to a telescope is never a straightforward affair.  I’ve already come to the conclusion that -notwithstanding Orion’s impressive design and construction efforts- nothing in astronomy is ever configured to do what you want right out of the box.  It seems no matter how many threaded holes Orion thoughtfully included in the stock configuration, there just aren’t enough, or they’re in the wrong locations, to do what I want.  So it’s time to break out the tap and die set.

What you see here is the end result of three weeks of brainstorming and trial & error.  At the end of the day, all the dovetail and quick-disconnect pipe dreams ended up in the parts bag, and I settled on a simple, chamfered aluminum mounting plate, screwed to the outside of the ring clamp closest to the focuser.  Rock solid, low-profile, and unobtrusive.