The physical dynamics of taking a photograph have barely changed in the last hundred years. When a photographer presses the shutter release, a small light-sensitive surface captures a likeness of the scene in view. Yet, the unseen reality is that modern cameras use algorithms and processing power to interpret the light they capture. The image that arrives on your memory card is not the one that entered the lens. This reliance on processing is only growing, and the next step is “computational photography”: the use of advanced software to produce image quality that optics and conventional processing alone cannot achieve.

The Current Model of Image Processing

To understand how tomorrow’s cameras will work, it is first necessary to get a basic feel for current image processing.

At present, when you take a photo with your DSLR or smartphone, the captured RAW data goes through a series of refinements. First comes demosaicing, where the camera reconstructs full color from the sensor’s mosaic of red, green and blue photosites. Noise and blur reduction follow, then the tone is adjusted and any HDR processing you have requested is carried out. At the end of all this, you get your JPEG.
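As a rough illustration only, here is a minimal sketch in Python (using NumPy and SciPy, with deliberately simplified stand-in operations rather than any manufacturer’s real algorithms) of how such a sequential pipeline is typically wired together:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def demosaic(raw_bayer):
    # Crude stand-in: smooth the mosaic and replicate it into three
    # channels. Real demosaicing interpolates each color plane from its
    # own Bayer sites with edge-aware filters.
    smoothed = uniform_filter(raw_bayer.astype(float), size=3)
    return np.stack([smoothed] * 3, axis=-1)

def denoise(rgb):
    # Stand-in noise reduction: a small box blur on each channel.
    return uniform_filter(rgb, size=(3, 3, 1))

def tone_map(rgb, gamma=2.2):
    # Stand-in tone adjustment: a simple gamma curve.
    return np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)

def process(raw_bayer):
    # Each stage consumes only the previous stage's output -- the
    # strictly sequential structure described in the article.
    rgb = demosaic(raw_bayer)
    rgb = denoise(rgb)
    rgb = tone_map(rgb)
    return (rgb * 255).astype(np.uint8)   # 8-bit output, ready for JPEG encoding

# Example: process a fake 16x16 RAW frame.
jpeg_ready = process(np.random.rand(16, 16))
```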

The technology is being refined with each new model of camera — hence the constant improvements in image quality. There are limits to the current software, however.

For a start, each of the individual tasks mentioned above is often performed by a standalone piece of technology. As a result, the software must re-establish what camera and lens it is working with for each task, and little information is passed from one process to the next. The pipeline also suffers from being strictly sequential: each calculation is an approximation, so small errors are compounded at every step.
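To make the compounding concrete (the percentages below are invented for the sake of the arithmetic, not measured figures): if each of five stages preserved, say, 98% of the fidelity handed to it, the chain as a whole would keep only about 90%.

```python
# Hypothetical illustration: five sequential stages, each preserving 98%
# of the fidelity it receives. The figures are invented; the point is
# that small per-stage losses multiply.
fidelity = 1.0
for stage in ["demosaic", "denoise", "deblur", "tone", "hdr"]:
    fidelity *= 0.98
print(round(fidelity, 3))   # ~0.904
```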

Even with these drawbacks, current DSLRs produce jaw-dropping detail with minimal defects. But the next generation will be capable of so much more.

Future of Image Processing – Computational Photography

There are numerous small startups working on radical new ways to produce high-quality images, using advanced software. Light hopes to employ multiple sensors to produce one brilliant composite image. Pelican Imaging wants to give phone photographers the opportunity to capture depth. One of the more promising technologies comes from a Montreal-based firm named Algolux.

This team of fifteen is working on CRISP, the “Computationally Reconfigurable Image Signal Platform”. It is designed as a direct replacement for the flawed image processing software currently in use. Every part of CRISP is made by Algolux to work as one, and every task starts with the original RAW data in order to minimize computational imaging errors.
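Algolux has not published how CRISP works internally, but the general idea of reconstructing the final image directly from the RAW data, rather than chaining independent stages, can be sketched as a single optimization problem. The example below is a generic illustration of that style of processing (gradient descent on a data-fidelity term plus a smoothness prior), not Algolux’s actual algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur(x, sigma=1.5):
    # Simple forward model: a Gaussian blur standing in for the combined
    # effect of the lens and sensor.
    return gaussian_filter(x, sigma)

def reconstruct(observed, steps=200, lr=0.5, lam=0.01):
    # Recover the image in one joint step by minimizing
    #   || blur(x) - observed ||^2  +  lam * || gradient(x) ||^2
    # with plain gradient descent, instead of running fixed
    # denoise/sharpen stages one after another.
    x = observed.copy()
    for _ in range(steps):
        residual = blur(x) - observed
        grad_data = blur(residual)   # Gaussian blur is symmetric, so it is its own adjoint
        laplacian = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                     np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x -= lr * (grad_data - lam * laplacian)
    return np.clip(x, 0.0, 1.0)

# Example: recover a sharper estimate from a blurred, noisy observation.
truth = np.zeros((64, 64))
truth[24:40, 24:40] = 1.0
observed = blur(truth) + 0.01 * np.random.randn(64, 64)
estimate = reconstruct(observed)
```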

Before and After photo of a bird, by Algolux.

The results are impressive. Unlike the softening effect of the noise reduction employed by your current desktop photo editor, CRISP produces crystal clear definition, even in very low light. Daniel Nahmias-Léonard of Algolux sees many future uses for this photography technology.

“We see CRISP as a highly adaptable image processing framework that can be used across industry verticals and applications. It will allow sensor manufacturers, optics designers and application creators to depend on a single approach, get more value from their innovations, and not be hampered by the limitations of traditional image processing methods.”

But Algolux has a couple more tricks up its sleeve.

The first is Virtual Lens technology. This addition to the digital image processing workflow extracts greater detail from an otherwise soft image. The mathematics used for this is, quite simply, bewildering, but the sample images from Algolux show exquisite detail from a smartphone camera with a damaged lens.
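Algolux has not revealed the mathematics behind Virtual Lens, but recovering detail from an image softened by known optics is classically approached with deconvolution. The sketch below shows a textbook Wiener deconvolution against a made-up Gaussian point-spread function, purely to illustrate the family of techniques involved, not the company’s own method:

```python
import numpy as np
from numpy.fft import fft2, ifft2, ifftshift
from scipy.ndimage import gaussian_filter

def wiener_deconvolve(soft, psf_origin_centered, noise_to_signal=0.01):
    # Classic Wiener filter: divide out the lens blur in the frequency
    # domain, damped where the signal is weak relative to the noise.
    H = fft2(psf_origin_centered)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(ifft2(G * fft2(soft)))

def make_psf(shape, sigma=1.5):
    # A made-up "soft lens": a Gaussian point-spread function, built at
    # image size and shifted so its peak sits at the origin, which keeps
    # the deconvolved result from being translated.
    psf = np.zeros(shape)
    psf[shape[0] // 2, shape[1] // 2] = 1.0
    psf = gaussian_filter(psf, sigma)
    return ifftshift(psf / psf.sum())

# Example: a stand-in soft capture, then a sharpened estimate.
soft_image = gaussian_filter(np.random.rand(64, 64), 1.5)
sharpened = wiener_deconvolve(soft_image, make_psf(soft_image.shape))
```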

In more everyday terms, this means that manufacturers of phones and cameras will be able to use cheaper optics and still maintain good image quality.

One other addition, looking to the longer term, will be Virtual IS. This technology combats camera shake and motion blur without the need for a lens with image stabilization capabilities. It works by measuring movement from a secondary camera — the front-facing camera on your phone, for instance. Once again, you need a Ph.D. to understand the computations required, but the example images posted by Algolux are astounding.
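The implementation details of Virtual IS are not public either, but one of its stated ingredients, measuring camera movement from a second stream of frames, can be illustrated with standard tools. The sketch below uses phase correlation to estimate the shift between two frames from a hypothetical secondary camera; a full system would turn an entire motion track into a blur kernel and remove it, much as in the deconvolution example above:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def estimate_shift(frame_a, frame_b):
    # Phase correlation: the peak of the normalized cross-power
    # spectrum's inverse FFT gives the translation between two frames.
    F1, F2 = fft2(frame_a), fft2(frame_b)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12
    corr = np.real(ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame_a.shape
    if dy > h // 2:   # map wrap-around indices to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Example: a frame from a hypothetical secondary camera and the same
# frame displaced by the handshake we want to measure.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shaken = np.roll(frame, (3, -5), axis=(0, 1))
print(estimate_shift(shaken, frame))   # approximately (3, -5)
```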

Before and After image of a car, by Algolux.

Before and After image of a lizard, by Algolux.

Before and After image of cards, by Algolux.

When Can I Have It?

At present, the majority of cameras use more traditional methods of image processing, with only a few smartphones breaking the mold. But in the next few years, this is forecast to change dramatically. Industry forecasts suggest that by 2018, more than half of smartphones will use computational photography technology such as CRISP. Current projections show that 1.5 trillion smartphone photos will be taken in 2019 — so if the likes of Canon and Nikon are to compete, surely such software will arrive in DSLRs not long after.

“Computational photography has already changed the way many of us take and interact with pictures — HDR, for example, is now a common feature in smartphones”, says Nahmias-Léonard. “As computing power increases, we will see cameras enable amazing applications for consumer and commercial use.”

Those of us who are excited about this image processing technology must stay patient for the time being — but if Algolux’s work is anything to go by, the wait will be worthwhile.



Post by Mark Myerson, a freelance writer with a love for photography, technology and the environment.

Hat-tip from the author: Fstoppers

All images are provided by and used with the permission of the Algolux team.