This morning Horace Dediu, writer at the excellent Asymco blog, made the following statement:
I recently tweeted that any discussion related to wearable technology needs to begin with a description of the job it would be hired to do. Without a reason for building a product, you are building it simply because you can.
There is truth to this. Often talk of new products seems to ignore the realities of making something that people will find useful. Ten years ago people would try to ground any discussion about a potential product by asking the question “What problem does this new thing solve?”. These days it is more fashionable to ask “What job will the user hire this new thing to do?”. The latter is definitely an improvement. A problem can be a powerful motivator, but many big innovations are more about creating new opportunities than solving an existing problem.
That said, while I agree this is a very useful tool for teasing out the viability of incremental new products, it can be counterproductive as a prerequisite for discussions on a topic like wearable computing. For a well-defined smartwatch product, yes, the specific job is important. And to be fair, this kind of detailed analysis is what Asymco does so well. But we are too early in the life of wearable computing to apply the same rigor.
Part of the reason some breakthroughs are made is precisely that the pioneers weren’t constrained by prosaic concerns about the mainstream or commercial utility of what they were working on. If they had been, it might have limited their thinking. Imagine if the Homebrew Computer Club had embraced the rule in Dediu’s statement. Sounds ridiculous, right? That’s because the job something will be hired to do is often the furthest thing from the minds of the people who are breaking new ground. They are doing it because it hasn’t been done before, or because doing it is exciting, or because other people they like or respect are doing it, or even… just because they can.
OK, that much seems obvious, but I believe it goes further than just the Wozniaks of the world and also applies to the product innovators who drive new product categories to market.
For one thing, people who changed the world with technology, like Steve Jobs or Bill Gates, often didn’t ground their thinking in specific jobs as a first principle. Their intuition that their product had profound value drove them even when they didn’t know exactly what that value was.
“A computer on every desk and in every home” seemed like overreach in 1980. Without mainstream networking and then the Internet, most of the value we currently experience from our computers wasn’t even part of Gates’s thinking. In fact, as late as 1995 he famously overlooked the Internet in his book about the personal computing revolution, “The Road Ahead”. The reason Gates believed so strongly in personal computing wasn’t because of specific jobs. It was the intrinsic potential he saw in a general purpose device that could be programmed by others.
Secondly, when people do talk about the jobs that a new wave of technology will do, they are seldom talking about the actual jobs that people end up hiring the technology to do. Commerce has accelerated the web into the mainstream, but I doubt anyone working on the HTTP protocol or the Mosaic web browser was imagining banner ads or Amazon.com. Arguably the single most important job the smartphone does today is provide a 4″ touchscreen canvas for developers to do almost anything they want, yet Apple didn’t even think that native programming was important when it first launched the iPhone.
Should technologists work on platform-level technologies without specific knowledge of the products they will enable? Yes. In fact, this is where many high-risk, high-return opportunities lie. It is completely reasonable to approach something like wearable computing from the broad opportunity and work your way in. Does anyone believe that a small brick like the iPhone is as personal as computing will get? No. Do we have ideas for how it might get more personal? Yes. It will know more about us (sensors), it will know more about our immediate environment (sensors), and it will be easier to carry, more disposable, and more convenient (form factor). That’s enough to get started.
Should analysts explore the potential future market for wearable computing without any idea what the products will actually do? Yes. There are many interesting discussions around the size this market might be and the way it might impact other existing product categories. Admittedly most of this analysis will only be useful for things like establishing upper bounds or playing out extreme scenarios, but that’s useful.
What specific jobs will wearable computing do to change the world? We don’t know that yet. What we can do is wave our hands a little. We can say that wearable computing will (1) allow us to take some of the value we see in current computing modalities into new situations and (2) will open whole new areas of opportunity by being more personal and more ubiquitous.
Now, maybe these two things satisfy the requirement for specifying a “job”. But if they do, then some simple practical objections to the “job test” emerge. What qualifies as a job? Who gets to decide the level of specificity that is required? Who makes the judgment on which jobs are legitimate? And would they have given the green light for the web browser based on http://icanhas.cheezburger.com/?
Of course, none of this means that we shouldn’t talk about the jobs we hire wearable computing to do. It can be very productive to ground thinking in ideas about specific opportunities that new products might create, or problems that they might solve. Often, iterating between small, specific ideas and big, vague opportunities is particularly good at accelerating the process.
But a “description of the job” is an inappropriate prerequisite for the discussion.
Although many of us have lived in pre-computing, pre-Internet, pre-mobile worlds, we have forgotten what those worlds were like. Everything that seems so obvious in hindsight was completely invisible to us then — mostly even invisible to the people who changed the world. In the future we will look back and think that our pre-wearable-computing world was rather quaint, and we will marvel at the ways it has surprised us. Any ability we have to look into that future is compromised by the application of rigorous specificity as a precondition to the discussions that imagine it.