The headline feature in Apple’s latest smartphones, the iPhone 6s and 6s Plus, is something called 3D Touch, which lets you activate shortcuts on the phone by pressing a bit harder on the screen. For now, though, I found a less novel but far handier feature in the new iPhone — one that has long been the butt of jokes but is now becoming a necessary part of modern computing.
You may have heard of it: It’s called Siri, and together with voice-control initiatives from Google, Amazon, Microsoft and several start-ups, it is poised to change the way we think of computers.
As David Pierce wrote recently in Wired, voice recognition and artificial intelligence are getting so good so quickly that it isn’t really a stretch to imagine that talking to computers will soon become one of the signature ways we interact with them. The new Siri is paving the way to what you might call “ambient computing” — a future in which robotic assistants are always on hand to answer questions, take notes, take orders or otherwise function as auxiliary brains to whom you might offload many of your chores.
Picture the “Star Trek” computer, but instead of powering a starship, it’s turning off the basement lights, finding you a good movie on Netflix and, after listening in on a fight between you and your spouse, reminding you to buy flowers the next day. It will be slightly creepy and completely helpful — and it’s coming faster than you think.
There is one key improvement to Siri in the iPhone 6s that suggests these grand possibilities. Rather than having to reach for your phone, you can now activate Siri by yelling at it from a few feet away. “Hey, Siri!” you bellow, and the robotic assistant springs to life. This is not groundbreaking; hands-free voice control has been around in competing smartphones at least since Motorola introduced it in 2013, and several phone makers have adopted it since. Hands-free Siri is also available on older iPhones when the phone is plugged into its charger, because constantly listening for “Hey, Siri” consumes battery power. (The iPhone 6s, however, reduces the drain on the battery through hardware changes.)
But “Hey, Siri” is not the only improvement. In iOS 9, Apple’s new mobile operating system, Siri also has more powers to connect to deeper parts of your phone. It can control devices compatible with Apple’s home-automation system, called HomeKit — you can tell it to turn down the lights, for example. Siri also controls Apple Music, the company’s new streaming service. In the car, say, “Hey, Siri, play Dylan,” and up comes “Subterranean Homesick Blues.”
Then there’s the ubiquity of voice-control devices. Besides the phone, Apple has put Siri in its watch and its coming Apple TV set-top box. Amazon has it on the Echo, a voice-controlled computer that is constantly listening and ready to help you out, and also in its TV streaming devices. Google and Microsoft also have embedded voice in phones, computers and TV devices.
A host of start-ups are entering the game, too. One, called SoundHound, offers a taste of the possibilities of talking to machines: Rather than going through several sites to make a hotel reservation, you can ask, “Find me a three- or four-star hotel in New York next Friday for less than $300,” and off it goes.
The ubiquity of voice-controlled assistants changes the way we interact with them. When Siri and other voice systems were new, they seemed gimmicky. Nobody quite knew what to do with them, and interactions veered toward the awkward. But the more assistants there are, and the more you use them, the more natural they feel — and that means the more you’ll use them, feeding the cycle.
I’ve felt this happen most impressively with Amazon’s Echo, a machine that one addresses with the keyword “Alexa,” and which I keep in my kitchen, the place I most often need a hands-free device. In my early days with the Echo, I didn’t quite know what to use it for, and when it got something wrong, I tended to penalize the device for its shortcomings.
But the more I stuck with the Echo, the better I understood its capabilities. Now I consult the Echo several times a day for the weather, to set timers, to do quick kitchen math and to play music or audiobooks. It’s become one of the most useful gadgets I own. (And the voice-recognition hardware in the Echo is more powerful than that of the iPhone — Alexa can hear me from far across the room, while the iPhone 6s’s “Hey, Siri” stopped working beyond about five feet.)
The coming pervasiveness of voice-controlled machines will not occur without some social anxiety. There will be conventions to work out — is it O.K. to call out “Hey, Siri!” on a bus? Probably not soon, but in time, that may happen; you’ll cringe, and then it could become normal. (The new iPhone tries to learn your voice to prevent other people from activating your device.)
There will be questions of privacy, too. To start up when they hear certain keywords, systems like “Hey Siri” have to constantly listen to their surroundings. Apple says Siri is watching for a pattern, not recording or storing any data.
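To make the privacy trade-off concrete, a keyword spotter can be sketched as a sliding window over the most recent audio: it holds only enough input to match the wake phrase and discards everything older, which is one way a system can “watch for a pattern” without recording or storing speech. The sketch below is a toy illustration, not Apple’s implementation — it works on already-transcribed word tokens (real systems match acoustic features), and the names `make_detector` and `feed` are hypothetical.

```python
from collections import deque

# Hypothetical wake phrase, as a tuple of lowercase word tokens.
WAKE_PHRASE = ("hey", "siri")

def make_detector(wake_phrase=WAKE_PHRASE):
    """Return a feed(token) function that is True when the wake phrase
    has just been heard. Only len(wake_phrase) recent tokens are kept;
    older input falls out of the buffer, so nothing is stored."""
    recent = deque(maxlen=len(wake_phrase))

    def feed(token):
        recent.append(token.lower())
        return tuple(recent) == wake_phrase

    return feed

# Simulated stream of transcribed words reaching the detector.
detector = make_detector()
stream = ["turn", "off", "lights", "hey", "siri", "play", "dylan"]
fired_at = [i for i, tok in enumerate(stream) if detector(tok)]
print(fired_at)  # the index right after "hey siri" completes: [4]
```

The fixed-size `deque` is the whole privacy argument in miniature: the buffer’s capacity bounds how much the device can ever “remember” before the keyword fires.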
But you can imagine that actually analyzing all of your speech can’t be far off, because it would make voice assistants more useful. In fact, for years now, Google’s top search engineers have been describing the “Star Trek” computer as their vision of the future of search.
“The ‘Star Trek’ computer is not just a metaphor that we use to explain to others what we’re building,” Amit Singhal, the head of Google’s search team, once told me. “It is the ideal that we’re aiming to build — the ideal version done realistically.”