I have been experimenting with an iPhone program called SynthCam, which uses a synthetic aperture (built up by combining many frames) to produce interesting focus effects and to reduce noise in low-light images.
It isn’t the easiest program to use, but it does seem capable of some interesting results, particularly a deliberately shallow depth of field, which the standard iPhone camera cannot produce on its own.
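The core trick, as I understand it, is to align a burst of frames on a chosen subject and average them: the subject stays sharp while the misaligned background smears out, mimicking a large aperture, and the averaging also suppresses sensor noise. Here is a minimal sketch of that idea in Python with OpenCV; this is not SynthCam’s actual code, and the template-matching alignment and function name are my own illustration:

```python
import cv2
import numpy as np

def synthetic_aperture(frames, template):
    """Align each frame on the subject patch, then average the stack."""
    # Find where the subject sits in every frame.
    locations = []
    for frame in frames:
        res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, loc = cv2.minMaxLoc(res)  # (x, y) of the best match
        locations.append(loc)
    # Shift every frame so the subject lands where it is in the first frame.
    ref_x, ref_y = locations[0]
    h, w = frames[0].shape[:2]
    aligned = []
    for frame, (x, y) in zip(frames, locations):
        M = np.float32([[1, 0, ref_x - x], [0, 1, ref_y - y]])
        aligned.append(cv2.warpAffine(frame, M, (w, h)))
    # Averaging N frames cuts random noise by roughly sqrt(N), while the
    # unaligned background blurs as if shot with a much larger aperture.
    return np.mean(aligned, axis=0).astype(np.uint8)
```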
This article has much more information on the concept behind computational photography and the current state of the art in the field.
Computational photography trials on Flickr
Cisco, which acquired Flip’s maker, Pure Digital Technologies, may have killed the Flip earlier this year for reasons that remain obscure. But a start-up has taken its lessons to heart. Lytro looks to be, if anything, even more disruptive than Flip. The Economist has previously written about the technology behind its mould-breaking camera, which employs tools developed by the nascent field of computational photography to reconstruct the path of light rays, rather than simply capture where they hit the film or photodetectors, thus permitting any part of the image to be focused after snapping it. It is the brainchild of Ren Ng, who originally developed the concept for his doctoral thesis at Stanford University. The company he founded reportedly raised an initial $50m to build a device.
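The refocus-after-the-fact trick rests on recording a four-dimensional light field: a microlens array divides the sensor into many slightly offset sub-aperture views, and shifting those views against one another before summing moves the virtual focal plane. A toy version of the shift-and-add refocusing described in Ng’s thesis might look like this; the array layout and parameter names are my assumptions, and real implementations interpolate at sub-pixel precision rather than using np.roll:

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing of a 4D light field.

    light_field: array of shape (U, V, H, W), one grayscale sub-aperture
    image per (u, v) viewpoint. alpha sets the virtual focal plane:
    0 keeps focus at the capture plane; other values refocus nearer or
    farther.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # centre of the aperture, then accumulate.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```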
Program allows ordinary digital camera to see round corners
Scientists say computational periscopy program works out hidden scene in under a minute
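If I understand the reports correctly, the camera photographs the faint penumbra that the hidden scene casts on a visible wall past an ordinary occluding object. Because light transport is linear, the wall image is approximately a known transfer matrix applied to the hidden scene, so the scene can be recovered by a regularized inversion. A schematic sketch of that inverse step, where the matrix A and measurement b are placeholders rather than anything from the published method:

```python
import numpy as np

def reconstruct_hidden(A, b, lam=0.1):
    """Recover a hidden scene x from a wall photograph b, assuming
    linear light transport b = A @ x + noise, where A encodes how the
    occluder shadows each hidden-scene patch onto the visible wall.
    Tikhonov regularization keeps the inversion stable against noise.
    """
    n = A.shape[1]
    # Solve min_x ||A x - b||^2 + lam * ||x||^2 via the normal equations.
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```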
Earlier this month, Apple’s iPhone team agreed to provide me information, on background, about the camera’s latest upgrades. A staff member explained that, when a user takes a photograph with the newest iPhones, the camera creates as many as nine frames with different levels of exposure. Then a “Deep Fusion” feature, which has existed in some form since 2019, merges the clearest parts of all those frames together, pixel by pixel, forming a single composite image. This process is an extreme version of high-dynamic range, or H.D.R., a technique that previously required some software savvy…. The iPhone camera also analyzes each image semantically, with the help of a graphics-processing unit, which picks out specific elements of a frame — faces, landscapes, skies — and exposes each one differently. On both the 12 Pro and 13 Pro, I’ve found that the image processing makes clouds and contrails stand out with more clarity than the human eye can perceive, creating skies that resemble the supersaturated horizons of an anime film or a video game. Andy Adams, a longtime photo blogger, told me, “H.D.R. is a technique that, like salt, should be applied very judiciously.” Now every photo we take on our iPhones has had the salt applied generously, whether it is needed or not….
https://apple.slashdot.org/story/22/03/20/1823255/apples-iphone-cameras-accused-of-being-too-smart
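Apple’s Deep Fusion pipeline is proprietary, but the multi-frame merge it performs is close in spirit to exposure fusion, which OpenCV ships out of the box. A rough stand-in using Mertens fusion, not Apple’s algorithm, with placeholder file names:

```python
import cv2
import numpy as np

# A bracketed burst of the same scene, already aligned: the file names
# stand in for under-, mid-, and over-exposed frames.
frames = [cv2.imread(p) for p in ("under.jpg", "mid.jpg", "over.jpg")]

# Mertens exposure fusion weights every pixel by contrast, saturation,
# and well-exposedness, then blends the best-exposed parts of each
# frame, with no tone mapping or exposure metadata required.
fused = cv2.createMergeMertens().process(frames)  # float image, ~[0, 1]
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```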