Smart Optics Push Camera Phones Out Of The “Dark” Ages

July 2, 2008

Since their introduction in 2001, cell-phone cameras have transitioned from added features to standard items. Today, more than 80% of cell phones have at least one camera. Camera-enabled phones offer the convenience of having a camera that’s permanently on and, quite literally, on-hand for every occasion.

The camera modules used in mobile phones certainly can be considered triumphs of miniaturization. However, they are frequently unable to match the performance of a digital still camera (DSC) of comparable resolution. One of the most noticeable deficiencies of camera phones occurs in low-light environments, where picture quality is seldom acceptable.

The perception of poor low-light performance of camera phones is not solely due to technical factors. The availability of cameras in phones has given rise to a social trend of taking photographs in low-light environments: typically in the evening and in venues like clubs and restaurants where the luminance can be 5 lux or less compared with >350 lux outdoors in daylight. As luminance levels decrease, picture quality from a solid-state imager deteriorates rapidly, resulting in increased noise, loss of detail, and color errors.

Miniaturizing camera modules to fit ever-slimmer handsets has required a reduction in pixel dimensions, because the height of a camera module scales roughly with the size of its sensor. Pixel size decreased from 2.2 µm in 2007 to 1.75 µm in 2008, and is expected to reach 1.4 µm in 2009, with a roadmap out to 1.1 µm. This shrinkage has significant implications for image quality.

Simply put, as pixel size shrinks, the ability of the photodiode to absorb photons and release electrons (expressed as quantum efficiency) declines significantly. Consequential effects include smaller dynamic range and degraded signal-to-noise ratio (SNR). Cameras in mobile phones are, therefore, experiencing two negative trends: declining luminance and quantum efficiency.
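
The scaling argument can be made concrete with a little arithmetic. The pixel pitches are those quoted above; the shot-noise-limited model below is a simplification that ignores read noise, fill factor, and process improvements between generations:

```python
import math

# Photon capture scales with pixel area; in the shot-noise-limited
# regime, SNR scales with the square root of the captured signal.
# Baseline: the 2.2 um pixel of 2007.
baseline = 2.2  # um

for pitch in (2.2, 1.75, 1.4, 1.1):
    area_ratio = (pitch / baseline) ** 2   # relative photon count
    snr_ratio = math.sqrt(area_ratio)      # relative shot-noise SNR
    print(f"{pitch:4.2f} um pixel: {area_ratio:.2f}x signal, "
          f"{snr_ratio:.2f}x SNR")
```

A 1.1-µm pixel collects only a quarter of the light of a 2.2-µm pixel of the same design, halving the shot-noise-limited SNR before any other loss is counted.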

For use under low-light conditions, many DSCs provide an option that increases the aperture size (i.e., lowers the F-number) to compensate for the reduced number of photons reaching the imager. But decreasing the F-number also reduces the depth of field, making it difficult to obtain a good-quality picture, particularly of objects near the camera.
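
The trade-off can be illustrated with the standard thin-lens hyperfocal-distance approximation. The 4-mm focal length and 5-µm circle of confusion below are assumed, illustrative values for a small camera module, not figures from the article:

```python
# Hyperfocal distance H = f^2 / (N * c) + f for a lens focused at H;
# everything from roughly H/2 to infinity is acceptably sharp.
f = 4.0    # focal length, mm (assumed, typical phone module)
c = 0.005  # circle of confusion, mm (assumed)

for N in (2.8, 1.75):
    H = f * f / (N * c) + f   # hyperfocal distance, mm
    print(f"f/{N}: hyperfocal {H / 1000:.2f} m, "
          f"in focus from {H / 2000:.2f} m to infinity")
```

Opening up from f/2.8 to f/1.75 pushes the near limit of acceptable focus from roughly half a meter to nearly a meter, which is why fast fixed-focus optics alone are not enough.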

Typically, standard camera phones use an aperture from f/3.0 down to f/2.8, mainly to preserve adequate depth of focus. The aperture is fixed during manufacturing, because the height and cost constraints of camera modules for mobile phones leave no room for a mechanically variable aperture. An easy fix for image capture in low light is simply to increase the exposure time. However, this renders the picture susceptible to motion blur and camera shake, and it may not be possible for video capture, where the frame rate limits exposure time to 67 ms in “day mode” and 150 to 200 ms in “night mode.”
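
The quoted exposure ceilings follow directly from the frame rate, since a video frame can be exposed for at most 1/fps seconds. The 15-fps and 5-fps rates below are inferred from the article's 67-ms and 200-ms figures:

```python
# Maximum per-frame exposure in video capture: t_max = 1 / fps.
for label, fps in (("day mode", 15), ("night mode", 5)):
    t_max_ms = 1000.0 / fps
    print(f"{label}: {fps} fps -> max exposure {t_max_ms:.0f} ms")
```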

As a result, the design requirements for an improved camera module include low F-number optics for increased sensitivity at low luminance, together with a fast shutter to minimize motion blur and camera shake. The module must also retain a large depth of field to preserve the “point-and-shoot” capability that consumers demand from camera phones.

Naturally, the solution must be achieved at negligible cost and without increasing the height of the camera module. Because a low F-number and large depth of field are fundamentally conflicting optical requirements, a solution is only made possible by using image-enhancement technologies known as “smart optics.”

Smart Optics
Computational imaging is the process of manipulating images through the application of algorithms, a subset of which corrects for known optical effects of the lens system by image processing. The optical effect can be an intrinsic defect of the lens, or it can be a known amount of distortion deliberately introduced into the image via a specially designed lens.

In both cases, the image is adjusted in software. In the latter, however, the result is an algorithmically enhanced lens, which can function beyond the limits of traditional lens design. Algorithmically enhanced lenses enable features such as full optical zoom with no moving parts, continuous depth of field (sometimes described as extended depth of field), and small F-number optics for low-light environments.

The algorithmically enhanced lens solution for continuous depth of field results in all details of a scene being in focus, provided the object being photographed is between 10 cm and infinity from the camera module. It’s accomplished through controlled optical distortion and software and involves no moving parts. It is therefore rugged, reliable, instantaneous, and consumes virtually no power.

In a conventional camera module, the optical train is designed to focus a point source of light (placed a fixed distance from the camera) onto the imager. If the lens is out of focus, or the object is too close to the camera, then the spot is smeared over a diffuse area and the image is blurred.

The rule whereby the lens transforms the point source into the blurred spot is described mathematically by the point spread function (PSF). This type of blur can be transformed back into a spot using digital signal processing. But there is no reliable way of identifying whether a particular area is in or out of focus and, therefore, whether the transformation should be applied.

An algorithmically enhanced lens solution fixes this problem by intentionally de-focusing the entire image in a controlled manner, making the PSF less variable with object distance. Effectively, a special lens creates a uniformly blurred image of a point source located anywhere in the field, from near to far, that can be de-convolved by an algorithm. The result is a crisp image in which the foreground, middle-distance, and background are all in focus.
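
The de-convolution step can be sketched with a classical Wiener filter applied to a known, spatially uniform PSF. This is a generic stand-in for the proprietary algorithm the article describes; the Gaussian PSF, the 64×64 scene, and all parameters are illustrative:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Gaussian PSF centered in an image-sized array, normalized to sum 1."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, snr=1000.0):
    """Invert a known blur: W = H* / (|H|^2 + 1/SNR) in the frequency domain."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # PSF origin moved to (0, 0)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(G * W))

shape = (64, 64)
scene = np.zeros(shape)
scene[32, 32] = 1.0                          # ideal point source
psf = gaussian_psf(shape, sigma=3.0)         # known, uniform blur
blurred = np.real(np.fft.ifft2(
    np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)   # point sharpened again
```

The key point mirrors the article: because the PSF is known and the same everywhere in the field, a single filter restores the whole image without needing to decide which regions are in focus.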

Ultra-Fast Lens
An ideal camera for a handset would have an ultra-fast lens (low F-number) and provide a fully automatic solution that enables clear images under a wide range of luminance conditions. This is possible using algorithmically enhanced lenses.

The basis of the approach is to design the camera module with low F-number optics, i.e., f/1.75. In a standard camera, however, such a reduction in F-number would greatly diminish the depth of field, rendering any object closer than about three feet noticeably blurred.
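
Because the light gathered by a lens scales as 1/N² (N is the F-number), the move from a typical f/2.8 phone aperture to f/1.75 buys a substantial sensitivity gain:

```python
# Relative photon flux at the sensor when opening up the aperture.
gain = (2.8 / 1.75) ** 2
print(f"f/2.8 -> f/1.75: {gain:.2f}x more light")
```

That is roughly two and a half times the light, equivalent to more than one full photographic stop, without lengthening the exposure.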

Incorporating an algorithmically enhanced lens that provides a function akin to the continuous depth of field described above restores the scene depth to a usable range. The low F-number optics suit the camera to both still photography and video under low-light conditions, while preserving a fast shutter speed that helps minimize motion blur and camera shake. Signal processing restores the depth of focus, compensates for loss of contrast, and substantially reduces noise in the final image while preserving edges, fine details, and texture. The effectiveness of this solution can be clearly discerned by comparing the two photographs taken with identical imagers (see the figure).

Integration
Smart optics technologies combine a non-standard lens with a custom algorithm to deliver high-quality pictures in a way that is completely transparent to the customer. Integrating an ultra-fast lens solution in a camera module is relatively straightforward.

On the optics side, the standard lens barrel is replaced with a custom design of the same dimensions, which can be manufactured using the existing infrastructure and lens materials. This is combined with the image-processing algorithm. The algorithms for these image-enhancement solutions are usually small; implemented in VLSI hardware, they take approximately 100,000 gates, small enough to be embedded in the image pipeline on the CMOS imager. Alternatively, the algorithm can run in software or firmware on a dedicated image processor, a co-processor, or the phone's baseband processor.

All of these integration options are simple from a technical standpoint. Their benefits are so compelling that 3-Mpixel camera phones with continuous depth of field are already in production, and they will proliferate, together with zoom and ultra-fast lens solutions, in higher-resolution cameras in 2009.
