Yesterday the WSJ reported a new rumor that Amazon was going to launch a smartphone some time this year, with the most likely launch date falling in June.
Few solid hardware details were known about the device, but the WSJ rumor happened to have a detail in common with another rumor from the month before, namely that the smartphone would have 6 cameras.
I’m still not convinced that either rumor is true, even though they agree on a couple of details. But rather than simply ignore the rumors, I am going to do what I should have done last month: research the tech required for the rumored features, and see whether what is currently on the market matches what the 6 cameras could do.
Update: Leaked photos show the 4 front-facing cameras in an early Amazon smartphone prototype.
Rather than point out where the rumors are wrong, let’s look at some technical details concerning the features that each rumor claims are supported by the 6 cameras.
Let’s start with the month-old rumor from the industry analyst.
Ming-Chi Kuo claimed that the Amazon smartphone would have 6 cameras, and that “four cameras will be used for gesture control, allowing users to operate the smartphone without touching the touch panel.” He said this back in March, and at the time I didn’t know much about gesture control. Today I still don’t know much, but after spending an evening googling I can tell you that you don’t need 4 cameras for gesture recognition.
Wikipedia describes gesture recognition this way:
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Many approaches have been made using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques. Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to keyboard and mouse.
I found a couple of companies that offer gesture recognition equipment based on 3D cameras; a 3D camera is effectively a pair of cameras operating in sync, using the disparity between the two views to judge depth. I also found a company that says it can do gesture recognition with a regular camera like the one built into most laptops.
On a related note, the Microsoft Kinect has 2 cameras, one of which is an IR camera. And Intel unveiled RealSense gesture recognition tech back in January; their tech is based on a single camera (or so it appears).
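Why does a synced pair of cameras suffice for depth sensing? The standard pinhole stereo model: a nearby point shifts more between the two views than a distant one, and depth falls out of that pixel shift (disparity). Here is a minimal sketch of the formula; the function name and example numbers are my own illustration, not anything from Amazon’s or Intel’s hardware:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by a synced stereo pair (pinhole model).

    focal_px: camera focal length, in pixels
    baseline_m: distance between the two camera centers, in meters
    disparity_px: horizontal pixel shift of the point between the views

    Standard stereo relation: Z = f * B / d. A larger disparity
    means the point is closer to the cameras.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point that shifts 50 px between two cameras 6 cm apart
# (focal length 1000 px) sits about 1.2 m away -- roughly
# arm's length, which is the range gesture control cares about.
print(depth_from_disparity(1000, 0.06, 50))  # → 1.2
```

This is why 2 front cameras is the natural count for camera-based gesture tracking: one extra camera buys you the disparity term, and more cameras past that add robustness, not a new capability.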
I would say that it is pretty clear that Amazon doesn’t strictly need 6 cameras on a smartphone; even if they added gesture recognition they would need at most 3 cameras (2 in front and 1 in back).
And just to be clear, I am expressing doubt about the justification for the cameras, not whether they exist. Perhaps Amazon is going to use them for something really cool. And perhaps Amazon is going to add gesture control, with the 4 extra cameras on the 4 sides of the smartphone so they could detect your motions around the smartphone when it is lying on a table.
But even though gesture control could be a useful feature, there might be a cheaper option than adding 4 cameras. For example, it was around this time last year that I found this video on the Neonode website:
Neonode is well known for their touchscreen tech based on IR sensors, and they came up with a way to add a ring of IR sensors around the edge of a smartphone, giving it the ability to detect where your fingers are.
I don’t know that anyone is using it, but I think it could prove quite useful – more useful than gesture tracking, in fact.
What do you think?