Last night Amazon finally confirmed the worst-kept secret in tech: the company has built a phone. Running Android, the Fire Phone is unique in being ringed by four front-facing cameras that track the movement of the user’s head – something we’ve never seen on a phone before. So could the Fire Phone change how we interact with our phones forever?
Umm, probably not. I think the real impact of the Amazon phone is going to be with FireFly. Here’s why.
Because Amazon has, so far, shipped a big fat zero Fire Phones.
The big question is how well supported “dynamic perspective” will be. Though Amazon has opened it up to developers with an SDK, will Android developers think it worthwhile, given the limited number of people who will currently benefit? If you build a game with dynamic perspective as one of its key elements, it can only run on one device… so it is never going to be the next Angry Birds.
Sadly for Amazon, what developers look for is compatibility and consistency. Unless you’re the biggest player, there is very little room for outsiders or their ideas; developers instead choose to play it safe and build products that will work on as many devices as possible.
It’s why the bulky controller-with-a-screen on Nintendo’s Wii U has lain dormant, used only for map or inventory screens, as developers have instead wanted to make games that will work across the Wii U, Xbox and PlayStation consoles. It’s why so little has been done with Microsoft’s Kinect camera: why build games that require a peripheral that can’t be ported to the PlayStation, or that (following recent announcements) not every player will have?
Another difficulty is a simple one: is it any good? Whilst we’ve no reason to doubt Amazon’s engineering expertise, the real test will come when actual human beings get their hands on the phone and try it out in real-world situations. This technology is untested – and, as with all new technologies, it isn’t going to be great at first.
Finally, there’s the question that nobody inside Amazon dared ask: what’s the point? Whilst Amazon CEO Jeff Bezos showed off some interesting use cases for the camera tracking in terms of making the user interface a bit swishier, there didn’t appear to be a “killer app” that made it essential. It appears to be a lot of battery and processing power for not much gain. This isn’t a moment like when cameras were first built into phones, or when Apple first decided to do away with the keypad entirely and build a phone around a touchscreen.
If it were Apple or Google, or even Samsung unveiling the head-tracking cameras, I’d be slightly more confident of the technology bedding down. Presumably those companies are furiously researching ways to match it just in case, but it won’t be until they do that head tracking takes off.
I would instead argue that the major legacy of the Fire Phone will be FireFly mode. We’ve seen this sort of thing before, but never quite so fully realised. Essentially, with one button press you can point the camera at a wide range of things and it’ll offer you contextual actions related to them. For example, point the camera at a book and it’ll look it up on Amazon (of course) and offer you the chance to buy it on Kindle. Point it at a painting in a gallery and it will pull up information on the artist from Wikipedia; point it at an email address on a business card and it will read the address and let you add it to your contacts. It listens too – it can identify music and TV shows, and will then throw up IMDb information.
Whilst we’ve seen this stuff before with apps like Google Goggles, barcode scanners and Shazam, never before has such functionality been baked right into the operating system. Crucially too, Amazon has allowed apps to hook into the system in an intelligent way. So if you have MyFitnessPal installed and scan some food, it will tell you how many calories it has, and so on.
This instant scanning and pseudo-augmented-reality information could really change how phones work. Whilst, again, it may not be Amazon that takes over the world with it, I’ve every expectation that at the next major Apple or Google keynote they will show off a new, enhanced Siri/Google Now/etc. that matches this functionality and allows third-party apps to get in on the action too.
And this could completely change how we use our phones: instead of inputting so much information through the keyboard, we’ll be scanning more, because scanning will give us more information – and, through our apps, information that is more personalised and relevant to us too.
You read it here first.