In simple terms, Computational Photography is the use of computer processing to produce a single image, usually from multiple source photographs.
We have been using a simple form of Computational Photography since the advent of even modest digital cameras, which include algorithms to brighten, darken, or straighten images. The first time I knew I was using Computational Photography was when a camera “stitched” several photos together into a single panoramic photo.
What has changed is that today, in 2018, Computational Photography has exploded with methods of “stitching” together different photos of roughly the same scene, taken from slightly different angles and/or with different focus distances (i.e. depth).
Modern Computational Photography can be accomplished with:
- a single camera with two lenses
- multiple pictures from the same camera, perhaps with different lenses
- multiple pictures from different cameras
For example, in an image focused on a nearby subject, objects in the background may be fuzzy. In the same shot focused on the background, the foreground may be fuzzy instead. Computer processing, often built right into the camera, can seamlessly merge those images into one that is spectacularly clear in both the foreground and the background.
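The merge just described can be sketched as a per-pixel “sharpest wins” rule, a much-simplified form of what is often called focus stacking. This is a toy illustration, not how any particular camera actually does it: real pipelines also align the frames and blend smoothly across regions. The function name and inputs below are hypothetical, and sharpness is estimated here with a simple Laplacian high-pass filter:

```python
import numpy as np

def focus_stack(img_a, img_b):
    """Merge two grayscale shots of the same scene, keeping whichever
    image is locally sharper at each pixel (toy focus stacking)."""
    def sharpness(img):
        # Discrete Laplacian: responds strongly where fine,
        # in-focus detail is present, weakly in blurred regions.
        lap = (-4 * img
               + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
               + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
        return np.abs(lap)

    # At each pixel, take the value from the sharper source image.
    mask = sharpness(img_a) >= sharpness(img_b)
    return np.where(mask, img_a, img_b)
```

A real implementation would compare sharpness over a neighborhood rather than a single pixel, and feather the transitions so the seams between the two sources are invisible.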
Watch these short videos to understand more about how modern Computational Photography works: