Imagine an always-recording, 360-degree, HD, wearable, networked video camera. Google Glass is merely an ungainly first step towards this. With a constant feed of everything she might see, the photographer is freed from instant reaction to the Decisive Moment and is instead faced only with the Decisive Area to be in, and perhaps the Decisive Angle from which to view it. Already we've arrived at the Continuous Moment, though only an early, primitive version of it.
Evolve this further into a networked grid of such cameras, and the photographer is freed from those constraints as well, becoming truly a curator of reality after the fact. “Live” input, if any at all, would consist of a “flag” button the photographer presses when she thinks a moment stands out, much as is already done when recording ultra-high-speed footage. DARPA has already developed a camera drone that can stay aloft recording at 1.8-gigapixel resolution for weeks at a time, covering a field as wide as five miles while resolving detail as fine as six inches across, and it can archive 70 hours of footage for review. This feat wasn't achieved with an expensive new sensor breakthrough, but by networking hundreds of cheap off-the-shelf sensors, just like the one in your smartphone.
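To make the idea concrete, here is a minimal sketch, in Python, of what that “flag and curate later” workflow might look like. Everything in it is hypothetical, not any real camera's API: a rolling archive of frames, a single flag() call as the only live input, and a curate() step afterwards where the actual choice of moment happens.

```python
from collections import deque
from dataclasses import dataclass

# Hypothetical sketch, not a real camera API: the camera records
# continuously, the photographer only flags instants worth revisiting,
# and all framing decisions are deferred to review time.

@dataclass
class Frame:
    timestamp: float   # seconds since recording started
    pixels: bytes      # placeholder for the raw image data


class ContinuousRecorder:
    def __init__(self, fps: int = 30, hours_kept: float = 70.0):
        # Rolling archive: older frames fall off the back, mirroring
        # the 70-hour review window described above.
        self.frames = deque(maxlen=int(fps * 3600 * hours_kept))
        self.flags: list[float] = []

    def capture(self, frame: Frame) -> None:
        self.frames.append(frame)

    def flag(self, timestamp: float) -> None:
        # The only "live" input: mark a moment that stood out.
        self.flags.append(timestamp)

    def curate(self, window: float = 5.0) -> list[list[Frame]]:
        # After the fact, pull every frame within `window` seconds of
        # each flag; picking the exact moment and angle happens here.
        return [
            [f for f in self.frames if abs(f.timestamp - t) <= window]
            for t in self.flags
        ]
```

The point of the sketch is only the shift in where the decision lives: capture() runs constantly and indiscriminately, while everything that used to demand split-second reflexes is pushed into curate(), long after the light has faded.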