Not so long ago, tech pundits’ favorite topic was predicting the demise of the PC as we know it. No more desktops, no more laptops – we were going to do everything on our tablets and phones. Now, however, tablet sales are dropping drastically and some folks have begun to forecast that the smartphone is on its last legs, too, soon to be replaced by … what? The theory is that artificial intelligence (AI) will replace the functions for which we rely on our phones today.
Despite high hopes by Apple, Samsung and other device makers, smartwatch sales are not exploding as quickly as predicted, although Gartner still claims the market is set to soar – an analysis that Forbes suggests should be taken “with a firm pinch of salt.” The dream of ubiquitous computing – cloud technology integrated into all of our household devices, working together in one big happy Internet of Things family – is slowly picking up steam, but has a long way to go. Meanwhile, many of the people I know who actually have to produce work in an information-driven world – myself included – are still sitting (or standing) at a desk in front of an array of monitors powered by a high-spec, traditional tower-cased, Intel-based system loaded with more RAM, CPU power and storage space than any comparably priced mobile machine can possibly offer.
What does the future really hold for us? Will present trends toward the consumerization of IT and its inevitable “dumbing down” of operating systems and software eventually mean we will be unable to get our work done – or will it make us more efficient and make things easier for us? That’s what I want to talk about in this look at a society that seems dedicated to creating obsolescence through simplification.
In my lifetime, I’ve seen a rapid evolution in how we process information and communicate it to others. I was lucky enough to get in on the beginning of the computer revolution and watch it transition from mainframes with dumb terminals to standalone personal computers to locally networked systems to Internet-connected ones. Over the past 20 years, I’ve watched the form factor shrink from towers to mini-desktops to laptops to tablets and smartphones. I now carry in my pocket a device that has far more RAM and CPU power than the desktop that cost me thousands of dollars more in the 1990s.
Until recently, the process has been primarily one of expansion rather than replacement. People bought smaller, battery-operated computing devices for travel and easy portability but returned to their more powerful desktop systems to get real work done. Now the capabilities of mobile devices are catching up, and many are ditching the big systems altogether and relying solely on laptops. Many who only use computers for entertainment, socializing and casual information lookup have gone a step further and have only tablets or “phablets” – large-screened phones. Docking stations for portables and technologies such as Microsoft’s Continuum make it possible to attach those small devices to large monitors and full-sized keyboards when necessary, which for many obviates the need for a large system.
But we’ve now reached the point where even the phone is considered by some to be too big and cumbersome to carry around (a perception exacerbated by the popularity of increasingly large models such as the Galaxy Notes and the iPhone 6 Plus). The smartphone is currently the reigning king of the device market, with an estimated 64% of the U.S. population owning one or more in 2016, according to Statista. Despite steady growth over the last five years, though, a survey by Ericsson done as part of 10 Hot Consumer Trends for 2016 indicates that half of those questioned believe staring at the small screen will be a thing of the past by 2021.
I disagree. Certainly artificial intelligence and virtual reality will play a big role in the future of computing, and there’s no doubt that the interfaces we know now will change and morph into something that’s more deeply integrated into our daily lives. However, while the cutting edge of technical innovation moves quickly, adoption of new tech by the masses proceeds at a more leisurely pace.
Let’s be realistic: According to the Sophos Naked Security blog, based on stats from Net Applications, as of last month (April 2016) almost eleven percent of desktop computers were still running Windows XP – an operating system that’s been around for fifteen years and out of support for two. If eleven percent sounds small, think about this: it represents millions of users – more than the number running Mac OS X (all versions combined).
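To see why an eleven percent share is anything but small, here is a back-of-the-envelope calculation. The installed-base figure below is an illustrative assumption for the sake of the arithmetic, not a number from the Net Applications data:

```python
# Back-of-the-envelope: a ~11% share of desktops is a huge absolute number.
# The installed base is an assumed round figure for illustration only.
desktop_installed_base = 1_500_000_000  # assumed desktop PCs in use worldwide
xp_share = 0.11                         # ~11% per Net Applications, April 2016

xp_machines = int(desktop_installed_base * xp_share)
print(f"{xp_machines:,} machines still on XP")  # 165,000,000 with these assumptions
```

Even if the real installed base is a fraction of that assumption, the XP population still comes out in the tens of millions.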
If people can’t be persuaded to give up the interface they’re comfortable with despite its lack of security, the diminishing number of applications available for it, and the urging of all the experts, are they really going to move to a way of computing that represents an even more drastic change than moving from XP to Windows 7/8/10?
Oh, but the new way will be easier and more intuitive, you say (that’s what they said about each subsequent version of Windows, too). We’ll just talk to our computers in natural language and they’ll display results on ubiquitous screens around the house or around town, or better yet, we’ll see them projected on our headsets or smart glasses’ HUDs. It sounds great, but Google Glass was a bit of a flop, VR headsets are great for gamers but not so feasible for wearing to the grocery store or a night out on the town, and even with voice command and control technologies such as Siri and Cortana and Google Now, we see the majority of people still tapping and thumb-typing on their phones rather than talking to them all the time.
Fortune said late last year that “voice assistants like Siri still aren’t cutting it” because they’re seen as a nice-to-have but not essential feature, and a survey taken last year showed that only a small percentage of smartphone owners whose devices support the features were using the voice-based personal assistants on a daily basis. That doesn’t mean this type of interaction doesn’t have its avid fans; Apple said last year that Siri was getting over a billion requests per week, and in January of this year, Microsoft said Cortana had fielded more than 2.5 billion questions on Windows 10 since that OS officially launched.
So, yes, voice is important – and becoming more so – despite challenges such as the noise it generates in offices, on planes, in theaters and other venues where silent interaction with your device makes more sense. But at present, voice-driven computing is a logical extension of the smartphone, not a replacement for it.
The idea behind the demise of the smartphone seems to be that all those “things” in the chaotic mix that we call the IoT (Internet of Things) will have their own embedded computers and interact with us individually. We’re seeing it happen, to an extent, with such modern conveniences as “smart” cars that we can talk to in order to control the radio, climate settings, and so forth on a hands-free basis. Even there, though, these systems are at their most useful when we connect them to our phones via Bluetooth.
The bottom line is: I believe many of us want the convenience of “device-less” computing – the ability to access our information and share with others without having to carry a little slab of electronics around, even if it is thin and light. I don’t dispute that in five years, we may very well have technology available that provides that, whether we wear it on our wrists, on our eyes or have it embedded under our skin.
I have my doubts, though, that a drastically different disruptive technology will do away with smartphones in that timeframe. It took a decade for many folks to come around to the idea of accessing the Internet on a phone. Now that they’ve embraced that, I don’t think they’ll be quick to give it up for the next “new-fangled” device to come along. I could be wrong, though. Catch me in 2021 and I’ll let you know whether I’ve changed my mind.