Why the Amazon Smartphone Might Need 6 Cameras, Part One

Yesterday the WSJ reported a new rumor that Amazon was going to launch a smartphone sometime this year, with the most likely launch date falling in June. Few solid hardware details were known about the device, but the WSJ rumor happened to have a detail in common with another rumor from the month before, namely that the smartphone would have 6 cameras.

I'm still not convinced that either rumor is true, even though they agree on a couple of details, but rather than simply ignore them I am going to do what I should have done last month: research the tech required for the rumored features, and see whether what is currently on the market matches what the 6 cameras are supposed to do.

Update: Here are part two and part three of this series.

Update: Leaked photos show the 4 front-facing cameras in an early Amazon smartphone prototype.

Rather than point out where the rumors are wrong, let's look at some technical details concerning the features that each rumor claims are supported by the 6 cameras.

Let's start with the month-old rumor from the industry analyst.

Ming-Chi Kuo claimed that the Amazon smartphone would have 6 cameras, and that "four cameras will be used for gesture control, allowing users to operate the smartphone without touching the touch panel." He said this back in March, and at the time I didn't know much about gesture control. Today I still don't know much, but after spending an evening googling I can tell you that you don't need 4 cameras for gesture recognition.

According to Wikipedia:

Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Many approaches have been made using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques.[1] Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to keyboard and mouse.

I found a couple of companies that offer gesture recognition equipment based on 3D cameras (effectively a pair of cameras operating in sync). I also found a company that says it can do gesture recognition with a regular camera, like the one built into most laptops.
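
A minimal sketch of what the single-camera approach can look like, assuming nothing more than OpenCV and a webcam: frame differencing finds the moving pixels, and tracking the centroid of that motion over a few frames is enough to call a left or right swipe. The thresholds and the swipe logic are my own guesses for illustration; a real product would use a proper hand-tracking model rather than raw motion.

```python
# Toy single-camera swipe detector. All thresholds are illustrative guesses.
import cv2

SWIPE_PIXELS = 120   # horizontal travel (in pixels) that counts as a swipe
HISTORY = 15         # how many recent centroid positions to keep

def centroid_x(moving):
    """x coordinate of the motion blob, or None if too little moved."""
    m = cv2.moments(moving)
    if m["m00"] < 255 * 500:          # fewer than ~500 moving pixels
        return None
    return m["m10"] / m["m00"]

cap = cv2.VideoCapture(0)             # the one regular camera
prev = None
trail = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev is not None:
        _, moving = cv2.threshold(cv2.absdiff(prev, gray), 25, 255, cv2.THRESH_BINARY)
        x = centroid_x(moving)
        if x is None:
            trail = []                # motion stopped, start over
        else:
            trail.append(x)
            trail = trail[-HISTORY:]
            if len(trail) == HISTORY:
                travel = trail[-1] - trail[0]
                if abs(travel) > SWIPE_PIXELS:
                    print("swipe right" if travel > 0 else "swipe left")
                    trail = []
    prev = gray
    cv2.imshow("motion", frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```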

On a related note, the Microsoft Kinect has 2 cameras, one of which is an IR camera. And Intel unveiled RealSense gesture recognition tech back in January; their tech is based on a single camera (or so it appears).
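
As for the two-camera setups, here is roughly why a synced pair helps: the shift (disparity) between the two views is larger for closer objects, so a depth map computed from the pair makes it easy to pick out a hand held near the device. A small OpenCV sketch, assuming two already-rectified frames saved as left.png and right.png; the file names, disparity settings, and threshold are all placeholders:

```python
# Rough sketch: depth from a synced camera pair via block matching.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # camera 1, one instant
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # camera 2, same instant

# For each pixel, find how far it shifted between the two views.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # OpenCV stores 16x disparity

# Bigger disparity means closer, so thresholding gives a crude "nearby hand" mask.
hand_mask = np.uint8(disparity > 20) * 255
cv2.imwrite("hand_mask.png", hand_mask)
```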

I would say that it is pretty clear that Amazon doesn't strictly need 6 cameras on a smartphone; even if they added gesture recognition they would need at most 3 cameras (2 in the front and one in the back).

And just to be clear, I am expressing doubt about the justification for the cameras, not whether they exist. Perhaps Amazon is going to use them for something really cool. And perhaps Amazon is going to add gesture control, with the 4 extraneous cameras on the 4 sides of the smartphone so they could detect your motions around the smartphone when it is lying on a table.

But even though gesture control could be a useful feature, there might be a cheaper option than adding 4 cameras. For example, it was around this time last year that I found this video on the Neonode website:

Neonode is well known for their touchscreen tech based on IR sensors, and they came up with a way to add a ring of IR sensors around the edge of a smartphone and give it the ability to detect where your fingers are.
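
As a rough illustration of the general idea (my own simplification, not Neonode's actual design): each edge carries small IR emitters paired with sensors, a finger hovering nearby reflects some of that light back, and the finger's position along the edge falls out of which sensors see the strongest reflection.

```python
# Toy model of an edge-mounted IR sensor strip: estimate where along the edge
# a finger is hovering from the reflected-light reading at each sensor.
# The numbers and the noise floor are made up for illustration.

def finger_position(readings, noise_floor=0.1):
    """readings: reflected IR intensity per sensor along one edge (0..1 scale).
    Returns the estimated finger position in sensor units, or None."""
    signal = [max(r - noise_floor, 0.0) for r in readings]
    total = sum(signal)
    if total == 0:
        return None                       # nothing close enough to reflect light
    # Intensity-weighted average of sensor positions (a simple centroid).
    return sum(i * s for i, s in enumerate(signal)) / total

# Example: a finger hovering between sensors 3 and 4 on a ten-sensor edge.
print(finger_position([0.1, 0.1, 0.2, 0.8, 0.7, 0.2, 0.1, 0.1, 0.1, 0.1]))
# prints roughly 3.5
```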

I don't know that anyone is using it, but I think it could prove quite useful - more useful than gesture tracking, in fact.

What do you think?


9 Comments on Why the Amazon Smartphone Might Need 6 Cameras, Part One

  1. All the better to see you with, my dear.

  2. Kinect uses infrared cameras as well as optical cameras.
    Two of each would provide good 3D vision–as long as the software is good enough. Otherwise, they might need more.

    • I don’t think an IR camera is practical for a smartphone.

      The teardown I found shows 2 cameras in the Kinect, one of which is a regular camera. The other is an IR camera, and it requires an emitter (an IR flashlight, basically).

      http://www.ifixit.com/Teardown/Microsoft+Kinect+Teardown/4066

      Wouldn’t an IR emitter be a power drain?

      • You would need multiple emitters (one for each side, though smaller ones) for the Neonode anyway; IR sensors alone wouldn't do anything. This is probably also why they show it using two fingers held together rather than just one: one finger probably wouldn't bounce enough infrared light back to the sensors. I wouldn't expect it to pick up anything subtle, either.

  3. Something I expect to see in future devices is eye tracking. It would enable "swipe gestures" with the eyes, which has been demonstrated to work with a modified Kinect: https://www.kickstarter.com/projects/4tiitoo/nuia-eyecharm-kinect-to-eye-tracking

    I wonder, though, whether a device held at a varying distance is the best place to put tracking cameras. A fixed spot close to the eyes might provide better accuracy, and if the camera were mounted right there it would also work for eyes behind glasses. A disadvantage would be the extra energy needed for the wireless link between the camera and the device being controlled. It still seems to be the best thing short of a brain-computer interface, though.

    • I was going to cover this in part two about the WSJ rumor.

      I found several companies with commercially available hardware for eye tracking. I don’t know that they can be shrunk down to the size of a smartphone, but they handle a wide variety of distances – even greater than what you would expect with a smartphone.

      According to one startup, glasses aren’t a problem, either.

      • So I'll wait for part two. By the way, I just checked the comments for the Kickstarter project I linked to earlier, and it would appear that the "NUIA eyeCharm" never actually got delivered. But they at least had a functional prototype, which was presented on a German TV show, where the reporter who tested it found it fascinating. They might have miscalculated their costs or something; I didn't read any further into the comments.

  4. It seems to me we’ve entered a period of development in hand held devices where new features are added because we can, not necessarily because they improve the user experience.

