Why smartphones can be better cameras

It’s true that even with $2,500 worth of equipment, an untrained photographer struggles to shoot a subject in motion. Capturing a kid playing football is a genuinely difficult task, and the image will often come out blurry or fuzzy. Yet next to that untrained photographer stands the kid’s dad, using his $400 smartphone to shoot better photos of his son. How is that possible?

The majority of people who buy a DSLR don’t know even the basic photography principles needed to adjust focal length and shutter speed, so they leave the camera to pick its own settings in P or Auto mode. Excuse me, sir, but if I had spent such a fortune on a camera body and a Canon or Nikon lens, I would use it the right way, manually, to take full advantage of my expensive DSLR equipment. But learning by trial and error takes effort, and most people don’t have the patience or the time to learn the fundamentals of photography. In that case, leaving a DSLR in auto mode is like keeping a Ferrari locked in a dark, moldy basement.

Comparing a DSLR picture (auto settings) with one from a Galaxy S III or iPhone 5, you may find no difference at first glance. But when both capture a fast-moving subject, the smartphone’s image can come out clearer and crisper than the DSLR’s.

[dropcap]Why?[/dropcap]

Thanks to software optimized for capturing motion, new-generation smartphones can use computer vision to recognize the ball as a circle model using [highlight color=”yellow”]OpenCV[/highlight], then apply special filters and auto-adjust shutter speed, focal length, ISO, exposure time, and other parameters to capture the best possible frame of the ball. Object detection and recognition would be the next step for smartphones: object-tracking technologies could follow an event or a person in a particular situation. This theory is now being put into practice. Pulli and a team of researchers at NVIDIA are working on technologies that let more developers take advantage of the powerful application processors built into smartphones and cameras. FCam, short for ‘frankencamera’, part of a joint research project with a team led by Marc Levoy at Stanford University’s Computer Graphics Laboratory, is an open-source C++ application programming interface aimed at giving developers precise control over all of a camera’s parameters.
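In OpenCV, recognizing a ball as a circle is typically done with the Hough circle transform (`cv2.HoughCircles`). As a dependency-free sketch of the geometry underneath that technique, the snippet below computes the unique circle (center and radius) passing through three edge points, the primitive by which edge pixels "vote" for candidate circles. The function name `circle_from_points` is illustrative, not part of any library.

```python
import math

def circle_from_points(p1, p2, p3):
    """Return (center_x, center_y, radius) of the circle through three points.

    Any three non-collinear edge pixels determine exactly one circle; a
    Hough-style detector accumulates such candidates to find the ball.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("points are collinear; no unique circle")
    # Squared distances from the origin, used in the circumcenter formula.
    s1, s2, s3 = x1**2 + y1**2, x2**2 + y2**2, x3**2 + y3**2
    cx = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    cy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    r = math.hypot(x1 - cx, y1 - cy)
    return cx, cy, r

# Three points on the rim of a ball centered at (1, 1):
print(circle_from_points((0, 0), (2, 0), (0, 2)))  # (1.0, 1.0, 1.414...)
```

A real detector runs edge detection first and then lets many point triples (or gradient votes) agree on the strongest circle, which is what `cv2.HoughCircles` does internally.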

For example, Pulli says, FCam could make it possible for an application developer to create a ‘sport mode’ that automatically focuses on a moving object, such as a soccer ball. Or, in low-light situations, a camera could take photos with two different settings: one with a short exposure time and another with a longer exposure time. One image may have more noise and the other may be blurry, but the two less-than-perfect images can be merged into a third that combines the best aspects of both.
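The merging step described above can be sketched as a per-pixel weighted blend, where each exposure is trusted more where its pixels are well exposed (close to mid-gray). This is a deliberately simplified, pure-Python take on exposure fusion, not the actual FCam or NVIDIA implementation; the function name and the mid-gray weighting are assumptions for illustration.

```python
def fuse_exposures(short_img, long_img, mid=0.5):
    """Blend two grayscale exposures (pixel values in [0, 1]).

    Each pixel is weighted by how close it is to mid-gray, so clipped
    shadows/highlights in one exposure defer to the other exposure.
    """
    fused = []
    for s, l in zip(short_img, long_img):
        ws = 1.0 - abs(s - mid)   # weight for the short (noisy) exposure
        wl = 1.0 - abs(l - mid)   # weight for the long (blurry) exposure
        total = ws + wl
        if total == 0:
            fused.append((s + l) / 2)  # degenerate case: plain average
        else:
            fused.append((ws * s + wl * l) / total)
    return fused

# A mid-gray pixel from the short shot outweighs a blown-out pixel
# from the long shot:
print(fuse_exposures([0.5], [1.0]))  # [0.666...]
```

Production pipelines add spatial terms (contrast, saturation) and blend across image pyramids, but the core idea is the same weighted average.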

Check out the full article at the source below.
