Satya Nadella’s Email, Decoded

I was very excited when I read Satya Nadella’s recent public email message about his direction for Microsoft. I left the company in early 2010, frustrated with its direction. The email seemed to confirm what I had hoped about his appointment as CEO.

Then I saw Jean-Louis Gassée’s critique of the message and realized that I had read Satya’s words through Microsoft goggles. Having lived the internal corporate process that takes a few strong, simple ideas and makes them into thousands of words of compromised language, I had subconsciously decoded his message.

Here’s the way I interpreted Satya’s words. I’m not saying this is the email he could have or should have written. It is simply the words I heard as opposed to the ones he wrote.

In Apple the world has a company that is about simplicity. A company that in its DNA is about things that are beautiful and easy to use. But it also needs a company that is about productivity.

Microsoft is that company.

Our DNA is about getting things done and making products that help people to get things done. Sometimes those products are less beautiful and more complicated, but they do more. We do more. That’s who we are.

Our mistake in the past was thinking that we needed to shift our focus from businesses to consumers. And that we could make this shift with the same essential strategies as before, and by continuing to bet heavily on our client OS. This approach simultaneously took us away from our strengths and kept our thinking in the past.

Instead, we will do three things.

One, we will unapologetically focus on productivity. Of course this means business productivity, but it also means personal productivity. As people embrace mobile and cloud they will become more demanding and more sophisticated. They will be less afraid of the technology. They will want more. We will give them more.

Two, we will shift our platform priorities from the client OS to the cloud. Today the platform has exploded out of client devices and is in the cloud. That’s where the core of our platform effort will be. It will drive user experiences that are delightful on devices, but also across devices. The Windows and Windows Phone client OSs are important, but they are secondary to the platform we are creating in the cloud.

Three, our user experiences will start where most people use computing today: on their phone. All experiences are important, but mobile experiences are most important. Our thinking will be mobile-first.

We’ll need to make a lot of changes. Changes in our strategy, in our products and in our organization. Change is hard, but this change will be made easier by the fact that we’ll be returning to what we do best. And by the fact that in parts of Microsoft, like our Azure cloud platform, it has been underway for some time. Our changes will not sacrifice a current position of strength. They will take us to a position of greater strength.

Native Versus Web: A Moment In Time

Whenever I see punditry about web versus app, or worse, web versus mobile, I see a graph in my mind’s eye. It is inspired by disruption theory, but I’m not a deep student of Christensen’s work so please don’t read his implicit support into anything I’m writing here.

Skipping to the punchline, here’s that graph.

For those who don’t find it self-explanatory, here’s the TL;DR.

Quality of UX Over Time

It starts with a simplistic graph of quality over time for app user experience on computers. Sometime in the early ’80s apps went mainstream in the sense that many consumers were using and purchasing them.

The exact date really isn’t important. The point is that apps exceeded a certain threshold in quality when they became “good enough” for mass adoption. An exact definition of “quality” also isn’t important, but it would encompass all aspects of the user experience in a very broad sense. That is, not only ease of use in the user interface, but also things like installation and interoperation with hardware and other software.

To further state the obvious:

  • This curve is conceptual, not based on accurate measurements of quality.
  • This is an aggregate curve, but curves would be different for different classes of application.
  • “good enough” might not be constant, and is probably rising over time along with our expectations.

The Internet Arrives

At some point in the mid-to-late ’90s the web was good enough for a lot of things and started inflating the first tech bubble as people switched to lower-cost web apps. By the mid-2000s, the web had all but destroyed the economy for consumer native apps on Windows. With the exceptions of hard-core gaming, productivity software and professional apps, most new experiences were being delivered via web apps. Even in those areas, web-based alternatives were starting to challenge the entrenched native app incumbents (e.g. Google Docs versus Microsoft Office).

iPhone and a New Set of Curves

In 2007 Steve Jobs unveiled the iPhone and opened the door to mainstream mobile computing. Initially third-party apps could only be web based, but soon there was an SDK and native apps were possible. What followed was a remarkable explosion of innovation and consumption as the iPhone — and then Android — liberated our connected computing experiences from desktop and laptop PCs.

This is where things got interesting. The fundamentally different user experience offered by the opportunities (ubiquity, location) and limitations (size, network quality) of mobile devices meant that we had jumped to a new set of quality curves. And on these new curves we had regressed to a situation where the web wasn’t good enough and native apps were once again dominant.

Today native apps offer a far superior experience in most cases. There are some exceptions, like news, where the advantages of surfing from web page to web page reading hyperlinked articles outweigh the performance benefits of installing all the apps for your favorite news sites. But for the most part the native apps are better. Will this be true forever?

I think that’s a risky bet to make. It’s not a question of if, but rather when web apps will be good enough for most purposes on mobile devices. The words “good enough” are important here. Although my graph has web apps overtaking native apps at some point, maybe they don’t need to. It is possible that good enough, combined with cost savings driven by developing apps without platform-imposed restrictions and without revenue share arrangements (e.g. Apple’s 30% share of developer revenues), will be sufficient to send the mainstream back to web apps.

The red arrow I have added in this final graph represents the reason I think so many observers draw the wrong conclusions. Web apps went from unquestioned supremacy on PCs to second-rate experiences on mobile devices. At the same time, good native apps on mobile devices catalyzed the explosion of the smartphone market. If you only look at a small piece of the elephant you can misinterpret what you see, and in this case people are misinterpreting this moment-in-time shift from web to mobile, or from web pages to apps, as a unidirectional trend.

Web and native are just two different ways of building and distributing apps. They both have advantages and disadvantages. Right now, in the relatively early stages of the phone as computing device, the web isn’t a good enough platform to deliver most mobile experiences. Web apps on phones are jerky and slow and encumbered by browser chrome that was inherited from the desktop. They don’t have good access to local hardware resources. But the trajectory is clear — things are improving rapidly in all these areas.

The Web is Independence

All of the above is predicated on the assumption that a distinction between web and native apps will still make sense in the future. Perhaps the future proof way to look at this is to view “native” as a proxy for experiences that are delivered via a platform owner acting as gatekeeper, and the “web” as a proxy for experiences that are delivered to consumers across platforms, with seamless interconnectivity, and unencumbered by the restrictions put in place by platform owners.

Put differently, the web represents independence from platform owners. It offers incredible freedom to build what you want to build, and to ship when you are ready to ship, without any gatekeepers. While I love my native apps today, I believe in the long term potential of this freedom. Other problems that native app stores solve today — and there are many, like discoverability, security and peace of mind — will be solved on the web in ways that won’t require a platform overlord.


Taming Tweetstorms

Tweetstorms have been around for a while, but have recently gone meta with a lot of discussion about their problems and merits. Including, of course, tweetstorms about tweetstorms. Some folks have presented solutions for making their creation easier (e.g. Dave Winer’s Little Pork Chop), but the problem is much more about their consumption. Although people often appreciate their content, Tweetstorms are widely regarded as obnoxious and intrusive. The comment that captured it best was one by @kirklove, who likened them to the “loud person at a cocktail party”.

The other problem is that using the reply mechanism to connect the tweets is causing issues with Twitter’s UI for threading replies. And even if that wasn’t the case, the more replies there are to individual tweets in the storm, the more difficult it gets to follow the storm.

Emergent features aren’t always good ideas, but in this case it does seem that Twitter has been presented with an opportunity to break out of the 140 character limit without losing the beauty and simplicity of constraining an individual thought to 140 characters. Tweetstorms are limited, but they can summarize a topic, lay out the logic of an argument or convey a narrative.

From a user experience perspective, the fundamental thing that Tweetstorms long for in Twitter’s model is a way to create relationships between tweets, other than the reply mechanism. Were Twitter to add this to the model, here’s how it might work. I’m going to use one of Fred Wilson’s recent tweetstorms as the example.

I will start with the consumption experience. Here’s what you will see in your timeline when Fred posts his Tweetstorm:

The only way this is different from any other tweet is the “1 of 7” on the top right. Clicking or tapping that element will expand the storm and you will see the following:

Each tweet now reflects its position in the storm with a number at the top right. You can collapse the storm from the last tweet. The important thing to note here is that the individual tweets behave exactly the way they do today. For example, you can reply to individual tweets and “View conversation” expands the reply thread for that tweet.

Here’s how Fred would create the storm. First, he writes a single tweet:

That “+” is the only new element, and it is only seen by Fred, the author of the tweet. Clicking or tapping the “+” opens the post form and the new tweet Fred creates becomes the second one in the storm. He then sees the following:

Now both tweets are numbered. Fred can tap the “+” on the second tweet to add a third. He can also add another tweet to this storm by tapping the “+” on the first tweet, and the new tweet will be inserted between the two existing ones. Deletion of any tweet in the storm, including the first one, just reorders the remaining tweets.
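The mechanics above boil down to an ordered list of tweets, with the “1 of N” labels derived from each tweet’s position in the list. A toy sketch of such a model (all names hypothetical, nothing like Twitter’s real internals):

```python
class Tweetstorm:
    """Hypothetical ordered-list model for a tweetstorm.

    Position labels ("1 of N") are derived from list order, so
    inserting after any tweet, or deleting any tweet (including
    the first), renumbers the remaining tweets automatically.
    """

    def __init__(self, first_tweet_id):
        self.tweet_ids = [first_tweet_id]

    def add_after(self, existing_id, new_id):
        # Tapping "+" on a tweet inserts the new tweet right after it.
        i = self.tweet_ids.index(existing_id)
        self.tweet_ids.insert(i + 1, new_id)

    def delete(self, tweet_id):
        # Deletion just removes the tweet; labels reflow on their own.
        self.tweet_ids.remove(tweet_id)

    def label(self, tweet_id):
        # The "1 of 7"-style badge shown at the top right of each tweet.
        return f"{self.tweet_ids.index(tweet_id) + 1} of {len(self.tweet_ids)}"


storm = Tweetstorm("t1")
storm.add_after("t1", "t3")   # "+" on the first tweet
storm.add_after("t1", "t2")   # inserted between the two existing tweets
print(storm.label("t2"))      # -> "2 of 3"
storm.delete("t1")
print(storm.label("t2"))      # -> "1 of 2"
```

The point of the sketch is only that the storm relationship lives outside the reply mechanism, so individual tweets keep all of their current behavior.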

Fred doesn’t have the ability to collapse his own tweetstorm in his timeline.

This is a crude mockup and far from a complete solution. The main point I’m trying to illustrate is that the core of the tweetstorm is a new type of relationship between tweets, and the challenge is to make this relationship easy to create and (more importantly) consume. I think the goals of any solution should include these two:

  1. Lower the impact of the storm on the consumption experience. I’m doing that above by collapsing the storm by default. As a consumer, it is my decision to allow the storm to take over my timeline.
  2. Minimize the impact of the solution on the current mechanics of individual tweets. Aside from being collapsed under the first tweet by default, every tweet in the storm above is a first class tweetizen.

Just in case I have given the impression that adding a new type of relationship between tweets will be relatively easy to do — it won’t. In a system operating at the massive scale Twitter does, even small changes have far-reaching consequences. And this isn’t a small change. Not to mention all the edge cases and unintended consequences in the user experience that need to be considered. For example, one key challenge will be to resolve confusion between tweetstorms and reply threads (this is a weakness in my mockup above).

Finally, apologies to the Twitter design team for brutalizing their UI with this mockup. I love Twitter as a product and there is no disrespect intended.

The Future Of Communication

The Myth of Customer Acquisition

How did you “acquire” that customer? Did they make one or more purchases? Did they sign up to your service? Did they give you their email address?

Sorry, but that isn’t worth a lot. They will buy the next thing somewhere else. They are already signing up to another service. If your email doesn’t automatically get sent to the junk folder, they will ignore it.

You don’t acquire a customer. You get access to their attention for a short period of time. You compete with many other things for each of these periods. The number of things competing for each period is getting larger all the time. The number of things doing exactly what you do, or selling exactly what you sell, is probably increasing too. And the periods themselves are getting shorter.

All is not lost. There are two things you can do.

First, you can compete for one quantum of attention. One visit from search, via an article that mentions you, from a social media share, or your viral video. Or much more valuable, via word of mouth. When you win that elusive quantum of attention, be sure to use it well. Present a way to solve a pressing problem. Be useful. Elicit joy.

Second, you can build trust. Every quantum of attention you win is an opportunity to play the long game. To build, with each small contact, an enduring relationship. The more trust there is, the more likely it is that you will get future attention. That your product will be heard above the cacophony. That your email will be read. That you will get more chances to transact, and to earn even more trust.

There are no “new” customers you can acquire, and you don’t have any existing customers to retain. There are people that trust you and people that don’t. You have to compete to win their attention either way. So throw away your models of customer acquisition and retention. Compete like crazy for each quantum of attention. And focus on using every one that you win to fulfill needs and build trust.

Feedly: Biting the Hand that Feeds

With the demise of Google Reader many publishers recommended to readers that they check out Feedly. Now they might be regretting that decision. Feedly seems to have taken a turn that is decidedly unfriendly to publishers. This started with them pointing URL shares to their own copy of the publisher’s content, instead of the publisher’s original content. Now they appear to be stripping publishers’ clickthrough tracking tags and replacing them with their own.

Feed Wrangler’s David Smith, a Feedly competitor, has spoken out against these practices and others that might be good for the feed reader service or provide a better experience to the end user, but are harmful to publishers.

The whole thing is worth reading, but here’s a little excerpt I found particularly interesting:

I am not, nor should I be, in the business of making unilateral changes to the chosen business models of content publishers. As a reader I love it when publishers provide full-text, unencumbered feeds for all of their articles. I understand, however, that not all publications choose this route. When they don’t, I believe my responsibility as a reading service is to respect that choice.

This statement not only sets Feed Wrangler apart from Feedly’s questionable practices, it actually sets them apart from read-it-later services like Instapaper and Pocket. RIL apps are definitely in the business of (as David puts it) “inserting [themselves] into the experience the writer had decided to present to their audience”. They are without a doubt improving the reading experience in many cases, but arguably doing so at the “expense of the publisher’s relationship to their readers”.

There is a continuum of businesses that are built on top of the work done by writers and publishers. Instapaper is not completely benign, but is definitely closer to that end of the continuum than the increasingly malignant Feedly. For publishers the question is what to do about the continuum as a whole. As I’ve said in the past, I believe the answer — easier said than done, I know — is for publishers to provide an experience so compelling that readers want to consume their content in situ and not on someone else’s platform.

The question is whether publishers can find an economic model to support this. Ads, at least in the form of their current race to increasing distraction and intrusiveness, don’t seem to be it.

Judge, Jury and Social Media Executioner

It seems more and more often that we see people shamed on social media. They do something greedy, stupid, or bigoted, or a combination of all three, and it comes to the attention of someone who’s internet famous. Tweets go out, virtual mobs assemble, and pretty soon there are real world consequences. Firings, defacement of property, physical violence and death threats are not uncommon.

Of course there are also the apologists who step in to play devil’s advocate, defend the offender, or just straight up troll the mob and its leaders. Sometimes the retaliation is more violence directed back towards the shamers and the offender’s original victim.

And then there are people who step in to point out that mob justice doesn’t seem right, no matter how egregious the original offense. But by this time the issue is too polarized for a nuanced point of view and these people are lumped in with the apologists. Complicating matters is that they are almost indistinguishable from some of the more sophisticated apologist trolls.

Absent the heat of the moment, I think most reasonable people would acknowledge that this isn’t a good way to bring justice to any situation. It looks way too much like a vigilante lynch mob. And even if the offender is guilty in most of the cases, we all know that it’s only a matter of time before some innocent is scarred for life, their personal and professional reputations destroyed in a way that no number of retractions can repair.

How many true offenders must be brought to justice in order to justify the cost of one false positive?

This is an old problem — addressed in the offline world by a complex architecture of civil society and legal systems — but set in a new borderless, online world that has little regulation. My intuition recoils at the idea of government regulating behavior on the social web, but given human nature and the capacity for groups to do damage to individuals, it’s hard to imagine a future where this isn’t happening.

In the meantime we should take a more critical look at online, crowdsourced justice. In particular, the influential, apparent do-gooders who are in many cases stirring the mobs to action. Most of them probably have good intentions, but some seem to enjoy the role too much. They ride the powerful wave of online emotion to even greater influence. Rinse. Repeat.

We should exercise more restraint before seeking justice through social media. Is there a way of solving the problem by speaking to the offender directly? Is it possible that our tweet could lead to a disproportionate response? Is the individual in our sights just the unlucky drop that made the bucket overflow, or are they truly responsible for the full extent to which we were wronged?

I’m sure that a lot of people who take their conflicts to Twitter or Facebook get no response at all. But some get a response far beyond their expectation or desire. In addition to the pain caused by the original offense, they may have to live with the guilt of a disproportionate response that ruined someone’s life.

Apple’s M7: Filling the Gaps for Indoor Location Estimation

I’ve seen and heard a lot of talk about Apple’s new M7 motion co-processor and iBeacon feature after the iPhone announcement earlier this week, most of it pointing out that these features lay some of the foundation for indoor mapping. But I haven’t seen anything explaining exactly why they have this significance. It has been several years since I worked in this space, but here is some speculation of my own.

You Are Here

First, some background. GPS doesn’t work indoors. Even if it did, it wouldn’t be sufficiently accurate for useful indoor mapping. Cellular tower triangulation does work indoors of course, but is much less accurate than GPS.

Another option is to triangulate position using data about Wi-Fi hotspots in the vicinity. The iPhone has had this capability for some time, initially using Skyhook’s data, but switching to Apple’s own data in 2010. Apple also recently acquired WifiSLAM, which had “developed ways for mobile apps to detect a phone user’s location in a building using Wi-Fi signals”.

Then there is iBeacon, Apple’s support for Bluetooth Low Energy (BLE). This enables BLE devices in the vicinity of an iPhone to wake it up and give it some information. That information could be location, so you could imagine these little BLE devices scattered around the mall telling nearby iPhones “Hey! You are here.”
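Both the Wi-Fi and iBeacon approaches reduce, at bottom, to estimating your distance from points with known positions and intersecting those estimates. A minimal trilateration sketch, with toy numbers and perfect distance estimates (real signal-strength ranging is far noisier):

```python
import math

def trilaterate(beacons, distances):
    """Estimate a 2-D position from three beacons at known positions
    and estimated distances to each (e.g. derived from BLE or Wi-Fi
    signal strength). Subtracting the first circle equation from the
    other two linearizes the system, leaving a 2x2 solve (Cramer's rule).
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)


# Three beacons at known spots in the mall; true position is (3, 4).
true = (3.0, 4.0)
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.hypot(true[0] - bx, true[1] - by) for bx, by in beacons]
pos = trilaterate(beacons, dists)  # -> approximately (3.0, 4.0)
```

With noisy real-world distance estimates you would use more beacons and a least-squares fit, but the geometry is the same.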

Or Are You?

These are important steps forward for indoor mapping, but probably not the whole solution. Even in a future where there is a lot of Wi-Fi data on hand and iBeacons installed all over the place, the system would be largely outside Apple’s control and probably unreliable.

Add to the challenge that we’ll be hard to please, with high expectations for accuracy and reliability for indoor location. We think at a higher resolution indoors — important differences are measured not in city blocks, but in yards or even feet. Small mistakes and inaccuracies will be frustrating, even if they only happen once in a while.

Enter motion sensors and the M7.

Filling the Gaps

If you have one good position measurement you can use motion sensors (accelerometer, gyro and compass) to measure your subsequent movements and keep track of your position. This can’t be the whole solution for two reasons:

  1. A lot rests on the accuracy of the initial position measurement. The last GPS measurement before losing the satellite signal is probably not good enough.
  2. Even if you can get a good initial measurement, every subsequent adjustment based on motion will have a tiny estimation error. These errors will accumulate over time and at some point the estimated position will be too inaccurate to be useful.

But it is conceivable that iBeacon- and Wi-Fi-based positioning will eliminate these problems for many indoor areas.

There is one remaining, huge, deal-breaking challenge for the approach of using motion-based positioning to fill in the gaps: It requires the motion hardware to be continuously measuring and processing your movements. Pre-5S, this would have drained the battery in short order. The M7 co-processor that Apple announced on Tuesday solves this problem.

Leading the Way

None of these technologies alone will make reliable indoor location possible, but together they might. It will probably take a while before we see indoor mapping in a keynote. There is lots of infrastructure involved that isn’t under Apple’s control (deploying all those beacons, for example). And we have to wait until the majority of iPhones in market have M7 and iBeacon support.

Also, Apple needs its competitors to support BLE with iBeacon-like features, because malls and airports and sports stadiums probably won’t be installing thousands of beacons for iPhones alone. So the ability to listen to beacons isn’t an opportunity to differentiate the iPhone.

Where they can differentiate, though, is in the gaps.

I think on Tuesday Apple revealed a key part of their strategy. Most of the subsequent talk about the M7 is focused on fitness oriented scenarios, and doing clever things when your phone detects certain activities (my car stopped, I sat down, etc.). Those might lead to nice apps and snazzy features, but I’m betting that Apple is interested in something much more fundamental: leading in the next frontier for mapping and location — indoors.

Faith in The Job Unknown

This morning Horace Dediu, writer at the excellent Asymco blog[1], made the following statement:

I recently tweeted that any discussion related to wearable technology needs to begin with a description of the job it would be hired to do. Without a reason for building a product, you are building it simply because you can.

There is truth to this. Often talk of new products seems to ignore the realities of making something that people will find useful. Ten years ago people would try to ground any discussion about a potential product by asking the question “What problem does this new thing solve?”. These days it is more fashionable to ask “What job will the user hire this new thing to do?”. The latter is definitely an improvement. A problem can be a powerful motivator, but many big innovations are more about creating new opportunities than solving an existing problem.

That said, while I agree this is a very useful tool for teasing out the viability of incremental new products, it can be counterproductive as a prerequisite for discussions on a topic like wearable computing. For a well defined smartwatch product, yes, the specific job is important. And to be fair, this kind of detailed analysis is what Asymco does so well. But we are too early in the life of wearable computing to apply the same rigor.

Part of the reason some breakthroughs are made is precisely that the pioneers weren’t constrained by prosaic concerns about the mainstream or commercial utility of what they were working on. If they had been, it might have limited their thinking. Imagine if the Homebrew Computer Club had embraced the rule in Dediu’s statement. Sounds ridiculous, right? That’s because the job something will be hired to do is often the furthest thing from the minds of the people who are breaking new ground. They are doing it because it hasn’t been done before, or because doing it is exciting, or because other people they like or respect are doing it, or even… just because they can.

Ok, that much seems obvious, but I believe it goes further than just the Wozniaks of the world and also applies to the product innovators who drive new product categories to market.

For one thing, people who changed the world with technology, like Steve Jobs or Bill Gates, often didn’t ground their thinking in specific jobs as a first principle. Their intuition that their product had profound value drove them even when they didn’t know exactly what that value was.

“A computer on every desk and in every home” seemed like overreach in 1980. Without mainstream networking and then the Internet, most of the value we currently experience from our computers wasn’t even part of Gates’s thinking. In fact, as late as 1995 he famously overlooked the Internet in his book about the personal computing revolution, “The Road Ahead”. The reason Gates believed so strongly in personal computing wasn’t because of specific jobs. It was the intrinsic potential he saw in a general purpose device that could be programmed by others.

Secondly, when people do talk about the jobs that a new wave of technology will do, they are seldom talking about the actual jobs that people end up hiring the technology to do. Commerce has accelerated the web into the mainstream, but I doubt anyone working on the HTTP protocol or the Mosaic web browser was imagining banner ads. Arguably the single most important job the smartphone does today is to provide a 4″ touchscreen canvas for developers to do almost anything they want, yet Apple didn’t even think that native programming was important when it first launched the iPhone.


Should technologists work on platform level technologies without specific knowledge of the products they will enable? Yes. In fact, this is where many high risk, high return opportunities lie. It is completely reasonable to approach something like wearable computing from the broad opportunity and work your way in. Does anyone believe that a small brick like the iPhone is as personal as computing will get? No. Do we have ideas for how it might get more personal? YES. It will know more about us (sensors), it will know more about our immediate environment (sensors), it will be easier to carry, more disposable and more convenient (form factor). That’s enough to get started.

Should analysts explore the potential future market for wearable computing without any idea what the products will actually do? Yes. There are many interesting discussions around the size this market might be and the way it might impact other existing product categories. Admittedly most of this analysis will only be useful for things like establishing upper bounds or playing out extreme scenarios, but that’s useful.

What specific jobs will wearable computing do to change the world? We don’t know that yet. What we can do is wave our hands a little. We can say that wearable computing will (1) allow us to take some of the value we see in current computing modalities into new situations and (2) will open whole new areas of opportunity by being more personal and more ubiquitous.

Now, maybe these two things satisfy the requirement for specifying a “job”. But if they do, then some simple practical objections to the “job test” emerge. What qualifies as a job? Who gets to decide the level of specificity that is required? Who makes the judgment on which jobs are legitimate? And would they have given the green light for the web browser based on the jobs people imagined for it at the time?

Of course, none of this means that we shouldn’t talk about the jobs we hire wearable computing to do. It can be very productive to ground thinking in ideas about specific opportunities that new products might create, or problems that they might solve. Often, iterating between small, specific ideas and big, vague opportunities is particularly good at accelerating the process.

But a “description of the job” is an inappropriate prerequisite for the discussion.

Although many of us have lived in pre-computing, pre-Internet, pre-mobile worlds, we have forgotten what those worlds were like. Everything that seems so obvious in hindsight, was completely invisible to us then. Mostly even invisible to the people who changed the world. In the future we will look back and think that our pre-wearable computing world was rather quaint, and we will marvel at the ways it has surprised us. Any ability we have to look into that future is compromised by the application of rigorous specificity as a precondition to the discussions that imagine it.

    [1] Asymco is my answer to the question “If I only read one blog on mobile computing business and strategy, what should it be?”


iWatch is a Proxy for Wearable Computing

I don’t know anything about Apple’s plans regarding iWatch. From my perspective the interesting questions aren’t around what Apple will do, and when they will do it, but rather around the opportunity for products in a new computing modality that is more available and more personal than the phone.

That there is an opportunity is quite obvious because computing is such a long way from being available in all situations. Consider…

Carry Options

Before smartphones (in the mainstream this means “before iPhone”) you needed a bag in order to take your computing with you. Now that we have smartphones, we only need a pocket.

But what about situations where we don’t have a suitable pocket or bag? On the beach in a speedo or clubbing in a little black dress.

How much functionality would you forgo if you could have a device that took computing from 95% of your daily situations to 99% of those situations?


Disposability

If you destroy your iPhone and you aren’t paying for insurance, then it will cost you hundreds of dollars for a pre-owned replacement. That’s a lot of money to the overwhelming majority of people, who will think twice before taking their indispensable device boating, cycling or bar hopping. Or taking it somewhere where it might be stolen.

Many people won’t think about this much until they destroy their first phone, and then after that they will think about it every time they use it on an escalator.

How much functionality would you forgo for a device that was disposable, in the sense that if it was lost or destroyed it would be no more inconvenient or costly than losing a disposable camera?


Accessibility

Have you ever found yourself using your iPhone when your iPad is less than 10 feet away? Or using your iPad when your MacBook Pro is in the next room?

Yes, we’re much lazier than we might have imagined. But the more generous, and probably more accurate, explanation is that the value of immediate access to computing without interruption outweighs the need to see something in higher fidelity on a larger screen.

When there is a computing modality more ubiquitous and more immediately accessible than the phone, the iPhone will find itself on the other side of the trade-off.

For many people, telling time is NOT sufficient value to outweigh the inconvenience of reaching for a phone, as evidenced by the trend to lose the watch and use one’s phone to tell time. But this is not evidence that the watch form factor is a loser; it is evidence that telling time alone isn’t enough functionality to justify wearing one.

The hard but necessary thing to do when thinking about iWatch is to forget about watches as we know them. For example, forget about telling time. Whatever this new thing is, it will give us the time. But that will represent a negligible fraction of the overall value that the device offers. It is similar to the way the voice-calling app represents only a small fraction of what the iPhone offers, but much more extreme.

Also, don’t assume that an iWatch is necessarily a wristwatch. That might be the best form factor, but it might not be.

Rather think about it in terms of taking the value of connected computing — all the value one can get from using apps and websites while connected to the Internet — and making that value available in the life situations where it currently isn’t available, and more conveniently available in the ones where it is.

How can iWatch be more wearable, more disposable, and more accessible than a smartphone? What smartphone functionality will we forgo for this increased availability?

Then, given a modality that is more personal and more ubiquitous than even the smartphone, what new opportunities will this open up? The early smartphones took traditional computing functionality and jammed it on the phone, but the greatest smartphone innovations are all about the new opportunities of immediacy and location that were created when the computer became something that was with you most of the time.

What new opportunities will emerge when the computer is with you ALL of the time, and is perhaps even physically connected to your body?

From this perspective Google Glass should be seen as one of Google’s forays into the space that an iWatch will enter. It is probably too delicate, too expensive and a little too creepy to be the mainstream success story in this space, but it is an interesting experiment nonetheless.

Microsoft’s SPOT Watch (later Smart Watch) was also interesting. At least a decade too early and a marketing impossibility, but the SPOT UI was excellent and the watches weren’t bad either. I was a watch snob back then and I was happy to wear a Suunto smart watch.

Recently we started to see watch-type devices that pair with smartphones. These are interesting, but the configuration complexity of needing two devices to get one thing done isn’t great. And of course, if you need your phone too then you’re not really getting any carry or disposability benefits.

The point of all this is twofold…

First, predicting that Apple will at some point do an “iWatch” of some sort is equivalent to predicting that they will still be in business when these new modalities roll around. Those making the predictions will be wrong until they are right. Just like iPhone.

Second, don’t think of this new thing as a watch. They might call it a watch, because “watch” is a word we know, one that conveys a lot of information (small, technology, personal), and one that doesn’t scare us with newness. Put differently, use of the word “watch” would be a marketing decision. Arguably a very smart one.

But this new thing represents something much more profound. It is a continuation of the trend to connect us more deeply to technology, and to each other via technology, with all the opportunity, wonder and potential for abuse that goes along with that.