Programming: The Uncareer

There has been a lot of introspection by programmers recently. One fascinating thread was kicked off by Ed Finkler, who wrote “The Developer’s Dystopian Future”. This resonated with Marco Arment, and Matt Gemmell followed up both pieces with “Confessions of An Ex-Developer”. All three are accomplished developers and successful writers.

As a (cough) veteran programmer, I recognize in myself many of the feelings expressed by Ed, Marco and Matt. In particular, the fear of being left behind. I’ve been lucky enough to work on a wide variety of programming projects over the years. And whether it was specialized DSP engines for high-speed image processing, simulations of neutron physics on UNIX workstations, apps on PCs, or experiences on the web, I never reached the point where I felt like I had completely mastered the science or the tools. There was always something new to learn, or someone else, somewhere in the world, who knew more than I did. As soon as I was getting close to mastery, the goalposts moved to some new technology or tool.

The more experience I had, the more I knew, and the more I knew that I didn’t know.

In 2002 I left hands-on programming to get an MBA and then wield it in PowerPoint decks at Microsoft. For almost a decade I wasn’t a programmer. I managed teams that included developers, but I didn’t write a single line of shipping code.

At some point I started some after-hours tinkering with the web and realized how much I missed the hands-on experience of building something. Slowly it sucked me back in. By 2010 I had left Microsoft and was spending a lot of my time programming. Code is not all I do, but I do enough of it to call myself a programmer again.

This time round I still feel that same programmer angst, but with much less intensity. Yes, new tools and technologies are emerging every day, and yes, it is impossible to keep up with all of them. But here’s the thing I realized: you don’t need to. In fact, I think a large part of success is ignoring much of the cacophony about new new things, and focusing on the specific thing you are trying to achieve with the technology.

For evidence that you don’t need an encyclopedic knowledge of current programming technology to have impact, just look to some of the biggest programming success stories. In the early ’90s, in his early twenties, Marc Andreessen wrote the Mosaic web browser. In the early ’00s David Heinemeier Hansson created the Rails web framework. He was in his early twenties. At around the same time Matt Mullenweg released the first version of WordPress before he turned twenty. Mark Zuckerberg was about twenty when he founded Facebook in 2004.

I’m pretty sure that even as Marc, David, Matt and Mark were bringing these things into the world there were older developers somewhere talking about how hard it was to keep up with all the new developments in programming.

Hoodie Not Required

Many people see the success of young developers and draw the conclusion that youth is the essential ingredient. That a young mind, the absence of obligations, and the ability to spend every waking minute coding are prerequisites for success. This isn’t just the point of view of grumpy 30+ year old programmers like me, it is a pattern that has been embraced by prominent early stage investors. But like many patterns, it is one based on correlation, not causation.

The cause here has much more to do with the outdated expectations that programmers have about their career paths. The old model was that you focused on learning the basics in the beginning of your career and then over time transitioned to practicing your craft. Later in your career, your productivity and value lay in your experience.

But the days of earning some sort of tenure as a wise old programmer are gone. Whether you are 25 or 55, the playing field is pretty much level. Sure, older programmers have experience to draw on (inexperienced programmers sometimes look like cartoon characters with legs spinning wildly but making no forward progress), but that knowledge based on experience is also a burden. It constrains your thinking, convincing you that things can’t be done. Young programmers don’t know what can’t be done. They just do it.

Programmer Half Full

Intellectual vocations with very little dependence on physical vitality are not new to modern civilization. What makes programming different is that the underlying practices and technologies are changing so fast that there is very little static foundation to speak of.

The reality, then, is that long term success as a programmer involves being focused on the task in front of you, but also aware of the changes taking place in your peripheral vision. You need the discipline to stay focused on being productive with your current toolset, often much longer than current vogue dictates. At the same time you need to have an open mind and sufficient awareness of new technologies to selectively and periodically start from scratch.

It is up to individuals to decide what this means for them. Do you enjoy reinventing yourself every few years? Are you comfortable in an environment where people 20 years younger than you are your technical peers? Is the idea of true, lifelong learning exciting, or is it exhausting? If your answers lean toward no, then a lifetime spent programming is perhaps not for you.

The good news in all this is that you don’t need to spend a lifetime programming to be successful doing it. Take a decade off and do something else if you like. You can always return. If you do, it won’t take you long to get into the hot new programming language, framework and platform. Before long, you’ll know enough to change the world.

The UI Controls Pendulum

Earlier this month Brent Simmons wrote about the advantages of using standard instead of custom controls in Mac and iOS apps. He points to the cost of developing custom controls and questions their benefit. Standard controls, on the other hand, have features like accessibility built in.

This makes a lot of sense. Just take a look at Marco Arment’s new podcast app to see standard controls in a beautiful, functional UI. But I wonder whether this isn’t a cyclical thing tied to the lifecycle of the OS user interface design.

The first iteration of Apple’s iOS user interface was in market for about 7 years before the major refresh that happened in iOS 7. There were design changes during the first 6 major versions, but they were relatively small and evolutionary. The platform owner, Apple in this case, won’t make these major changes often, partly because they are hard and costly, but mostly because they don’t want to undermine the familiarity of the design among their users.

Early in the life of a new UI design language there is pretty much only upside to using fresh and new standard controls in your app. In fact, in the really early stages of an OS design refresh the apps that use standard controls are distinctive and in the minority. People will seek them out to get the new hotness. And the halo of freshness might extend well beyond that. We’re 10 months into iOS 7 and it still feels new.

Later in the life of an OS design language this balance starts to shift. Now almost every app, even the very crappy ones, is using the standard controls. And using them well. The great app makers who used the standard controls effectively have been copied by everyone else.

Also, the design language no longer feels fresh. Rather, it feels old fashioned. Staid even.

So at some point in the cycle custom controls start to become valuable again. Apps that use them effectively will stand out and will be hard to copy. Consider the discussions about TweetBot’s famously custom UI, or the raving about Loren Brichter’s beautifully simple Letterpress design. In the latter half of the life of the original iOS design it became positively passé to rely on standard controls for your app.

Every app developer will design UI in the context of their own situation — their goals, budget, capabilities, and brand. Some will invest more than others in being unique. But wherever they are on the continuum, it would probably be good to also factor the OS lifecycle into design decisions.

Satya Nadella’s Email, Decoded

I was very excited when I read Satya Nadella’s recent public email message about his direction for Microsoft. I left the company in early 2010, frustrated with its direction. The email seemed to confirm what I had hoped about his appointment as CEO.

Then I saw Jean-Louis Gassée’s critique of the message and realized that I had read Satya’s words through Microsoft goggles. Having lived the internal corporate process that takes a few strong, simple ideas and makes them into thousands of words of compromised language, I had subconsciously decoded his message.

Here’s the way I interpreted Satya’s words. I’m not saying this is the email he could have or should have written. It is simply the words I heard as opposed to the ones he wrote.

In Apple the world has a company that is about simplicity. A company that in its DNA is about things that are beautiful and easy to use. But it also needs a company that is about productivity.

Microsoft is that company.

Our DNA is about getting things done and making products that help people to get things done. Sometimes those products are less beautiful and more complicated, but they do more. We do more. That’s who we are.

Our mistake in the past was thinking that we needed to shift our focus from businesses to consumers. And that we could make this shift with the same essential strategies as before, and by continuing to bet heavily on our client OS. This approach simultaneously took us away from our strengths and kept our thinking in the past.

Instead, we will do three things.

One, we will unapologetically focus on productivity. Of course this means business productivity, but it also means personal productivity. As people embrace mobile and cloud they will become more demanding and more sophisticated. They will be less afraid of the technology. They will want more. We will give them more.

Two, we will shift our platform priorities from the client OS to the cloud. Today the platform has exploded out of client devices and is in the cloud. That’s where the core of our platform effort will be. It will drive user experiences that are delightful on devices, but also across devices. The Windows and Windows Phone client OSs are important, but they are secondary to the platform we are creating in the cloud.

Three, our user experiences will start where most people use computing today: on their phone. All experiences are important, but mobile experiences are most important. Our thinking will be mobile-first.

We’ll need to make a lot of changes. Changes in our strategy, in our products and in our organization. Change is hard, but this change will be made easier by the fact that we’ll be returning to what we do best. And by the fact that in parts of Microsoft, like our Azure cloud platform, it has been underway for some time. Our changes will not sacrifice a current position of strength. They will take us to a position of greater strength.

Native Versus Web: A Moment In Time

Whenever I see punditry about web versus app, or worse, web versus mobile, I see a graph in my mind’s eye. It is inspired by disruption theory, but I’m not a deep student of Christensen’s work so please don’t read his implicit support into anything I’m writing here.

Skipping to the punchline, here’s that graph.

For those who don’t find it self-explanatory, here’s the TL;DR:

Quality of UX Over Time

It starts with a simplistic graph of quality over time for app user experience on computers. Sometime in the early ’80s apps went mainstream in the sense that many consumers were using and purchasing them.

The exact date really isn’t important. The point is that apps exceeded a certain threshold in quality when they became “good enough” for mass adoption. An exact definition of “quality” also isn’t important, but it would encompass all aspects of the user experience in a very broad sense. That is, not only ease of use in the user interface, but also things like installation and interoperation with hardware and other software.

To further state the obvious:

  • This curve is conceptual, not based on accurate measurements of quality.
  • This is an aggregate curve, but curves would be different for different classes of application.
  • “good enough” might not be constant, and is probably rising over time along with our expectations.

The Internet Arrives

At some point in the mid to late ‘90s the web was good enough for a lot of things and started inflating the first tech bubble as people switched to lower-cost web apps. By the mid-2000s, the web had all but destroyed the economy for consumer native apps on Windows. With the exceptions of hard-core gaming, productivity software and professional apps, most new experiences were being delivered via web apps. Even in those areas, web-based alternatives were starting to challenge the entrenched native app incumbents (e.g. Google Docs versus Microsoft Office).

iPhone and a New Set of Curves

In 2007 Steve Jobs unveiled the iPhone and opened the door to mainstream mobile computing. Initially third-party apps could only be web based, but soon there was an SDK and native apps were possible. What followed was a remarkable explosion of innovation and consumption as the iPhone — and then Android — liberated our connected computing experiences from desktop and laptop PCs.

This is where things got interesting. The fundamentally different user experience offered by the opportunities (ubiquity, location) and limitations (size, network quality) of mobile devices meant that we had jumped to a new set of quality curves. And on these new curves we had regressed to a situation where the web wasn’t good enough and native apps were once again dominant.

Today native apps offer a far superior experience in most cases. There are some exceptions, like news, where the advantages of surfing from web page to web page reading hyperlinked articles outweigh the performance benefits of installing all the apps for your favorite news sites. But for the most part the native apps are better. Will this be true forever?

I think that’s a risky bet to make. It’s not a question of if, but rather when web apps will be good enough for most purposes on mobile devices. The words “good enough” are important here. Although my graph has web apps overtaking native apps at some point, maybe they don’t need to. It is possible that good enough, combined with cost savings driven by developing apps without platform-imposed restrictions and without revenue share arrangements (e.g. Apple’s 30% share of developer revenues), will be sufficient to send the mainstream back to web apps.

The red arrow I have added in this final graph represents the reason I think so many observers draw the wrong conclusions. Web apps went from unquestioned supremacy on PCs to second-rate experiences on mobile devices. At the same time, good native apps on mobile devices catalyzed the explosion of the smartphone market. If you only look at a small piece of the elephant you can misinterpret what you see, and in this case people are misinterpreting this moment-in-time shift from web to mobile, or from web pages to apps, as unidirectional trends.

Web and native are just two different ways of building and distributing apps. They both have advantages and disadvantages. Right now, in the relatively early stages of the phone as computing device, the web isn’t a good enough platform to deliver most mobile experiences. Web apps on phones are jerky and slow and encumbered by browser chrome that was inherited from the desktop. They don’t have good access to local hardware resources. But the trajectory is clear — things are improving rapidly in all these areas.

The Web is Independence

All of the above is predicated on the assumption that a distinction between web and native apps will still make sense in the future. Perhaps the future proof way to look at this is to view “native” as a proxy for experiences that are delivered via a platform owner acting as gatekeeper, and the “web” as a proxy for experiences that are delivered to consumers across platforms, with seamless interconnectivity, and unencumbered by the restrictions put in place by platform owners.

Put differently, the web represents independence from platform owners. It offers incredible freedom to build what you want to build, and to ship when you are ready to ship, without any gatekeepers. While I love my native apps today, I believe in the long term potential of this freedom. Other problems that native app stores solve today — and there are many, like discoverability, security and peace of mind — will be solved on the web in ways that won’t require a platform overlord.


Taming Tweetstorms

Tweetstorms have been around for a while, but have recently gone meta with a lot of discussion about their problems and merits. Including, of course, tweetstorms about tweetstorms. Some folks have presented solutions for making their creation easier (e.g. Dave Winer’s Little Pork Chop), but the problem is much more about their consumption. Although people often appreciate their content, Tweetstorms are widely regarded as obnoxious and intrusive. The comment that captured it best was one by @kirklove, who likened them to the “loud person at a cocktail party”.

The other problem is that using the reply mechanism to connect the tweets is causing issues with Twitter’s UI for threading replies. And even if that wasn’t the case, the more replies there are to individual tweets in the storm, the more difficult it gets to follow the storm.

Emergent features aren’t always good ideas, but in this case it does seem that Twitter has been presented with an opportunity to break out of the 140 character limit without losing the beauty and simplicity of constraining an individual thought to 140 characters. Tweetstorms are limited, but they can summarize a topic, lay out the logic of an argument or convey a narrative.

From a user experience perspective, the fundamental thing that Tweetstorms long for in Twitter’s model is a way to create relationships between tweets, other than the reply mechanism. Were Twitter to add this to the model, here’s how it might work. I’m going to use one of Fred Wilson’s recent tweetstorms as the example.

I will start with the consumption experience. Here’s what you will see in your timeline when Fred posts his Tweetstorm:

The only way this is different from any other tweet is the “1 of 7” on the top right. Clicking or tapping that element will expand the storm and you will see the following:

Each tweet now reflects its position in the storm with a number at the top right.  You can collapse the storm from the last tweet. The important thing to note here is that the individual tweets behave exactly the way they do today. For example, you can reply to individual tweets and “View conversation” expands the reply thread for that tweet.

Here’s how Fred would create the storm. First, he writes a single tweet:

That “+” is the only new element, and it is only seen by Fred, the author of the tweet. Clicking or tapping the “+” opens the post form and the new tweet Fred creates becomes the second one in the storm. He then sees the following:

Now both tweets are numbered. Fred can tap the “+” on the second tweet to add a third. He can also add another tweet to this storm by tapping the “+” on the first tweet, and the new tweet will be inserted between the two existing ones. Deletion of any tweet in the storm, including the first one, just renumbers the remaining tweets.

Fred doesn’t have the ability to collapse his own tweetstorm in his timeline.

This is a crude mockup and far from a complete solution. The main point I’m trying to illustrate is that the core of the tweetstorm is a new type of relationship between tweets, and the challenge is to make this relationship easy to create and (more importantly) consume; a rough sketch of what that relationship might look like follows the list below. I think the goals of any solution should include these two:

  1. Lower the impact of the storm on the consumption experience. I’m doing that above by collapsing the storm by default. As a consumer, it is my decision to allow the storm to take over my timeline.
  2. Minimize the impact of the solution on the current mechanics of individual tweets. Aside from being collapsed under the first tweet by default, every tweet in the storm above is a first class tweetizen.
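To make that relationship concrete, here is a minimal sketch, in Swift, of what the underlying model might look like. Everything here is hypothetical (the Storm type, the field names, the operations); it is just one way to express “this tweet belongs to this storm at position n” outside the reply mechanism.

```swift
import Foundation

// Hypothetical model: a storm is an ordered list of tweet IDs,
// kept separate from the reply graph.
struct Storm {
    let id: String
    let authorID: String
    // Ordered membership is the new relationship; position is implied by index.
    private(set) var tweetIDs: [String] = []

    // Produces the "1 of 7"-style label for the tweet at a given index.
    func positionLabel(at index: Int) -> String {
        return "\(index + 1) of \(tweetIDs.count)"
    }

    // Tapping "+" on the tweet at `index` inserts the new tweet right after it.
    mutating func insert(_ tweetID: String, after index: Int) {
        tweetIDs.insert(tweetID, at: index + 1)
    }

    // Adding a new tweet to the end of the storm.
    mutating func append(_ tweetID: String) {
        tweetIDs.append(tweetID)
    }

    // Deleting any tweet, including the first, just renumbers the rest,
    // because numbering is derived from position rather than stored per tweet.
    mutating func remove(_ tweetID: String) {
        tweetIDs.removeAll { $0 == tweetID }
    }
}
```

Because the “1 of 7” labels are derived from position rather than stored on each tweet, insertion and deletion renumber the storm automatically, and every member stays an ordinary tweet that can be replied to and viewed on its own.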

Just in case I have given the impression that adding a new type of relationship between tweets will be relatively easy to do — it won’t. For a system operating at the massive scale Twitter does, even small changes have far-reaching consequences. And this isn’t a small change. Not to mention all the edge cases and unintended consequences in the user experience that need to be considered. For example, one key challenge will be to resolve confusion between tweetstorms and reply threads (this is a weakness in my mockup above).

Finally, apologies to the Twitter design team for brutalizing their UI with this mockup. I love Twitter as a product and there is no disrespect intended.


The Myth of Customer Acquisition

How did you “acquire” that customer? Did they make one or more purchases? Did they sign up to your service? Did they give you their email address?

Sorry, but that isn’t worth a lot. They will buy the next thing somewhere else. They are already signing up to another service. If your email doesn’t automatically get sent to the junk folder, they will ignore it.

You don’t acquire a customer. You get access to their attention for a short period of time. You compete with many other things for each of these periods. The number of things competing for each period is getting larger all the time. The number of things doing exactly what you do, or selling exactly what you sell, is probably increasing too. And the periods themselves are getting shorter.

All is not lost. There are two things you can do.

First, you can compete for one quantum of attention. One visit from search, via an article that mentions you, from a social media share, or your viral video. Or much more valuable, via word of mouth. When you win that elusive quantum of attention, be sure to use it well. Present a way to solve a pressing problem. Be useful. Elicit joy.

Second, you can build trust. Every quantum of attention you win is an opportunity to play the long game. To build, with each small contact, an enduring relationship. The more trust there is, the more likely it is that you will get future attention. That your product will be heard above the cacophony. That your email will be read. That you will get more chances to transact, and to earn even more trust.

There are no “new” customers you can acquire, and you don’t have any existing customers to retain. There are people that trust you and people that don’t. You have to compete to win their attention either way. So throw away your models of customer acquisition and retention. Compete like crazy for each quantum of attention. And focus on using every one that you win to fulfill needs and build trust.

Feedly: Biting the Hand that Feeds

With the demise of Google Reader many publishers recommended to readers that they check out Feedly. Now they might be regretting that decision. Feedly seems to have taken a turn that is decidedly unfriendly to publishers. This started with them pointing URL shares to their own copy of the publisher’s content, instead of the publisher’s original content. Now they appear to be stripping publishers’ clickthrough tracking tags and replacing them with their own.

Feed Wrangler’s David Smith, a Feedly competitor, has spoken out against these practices and others that might be good for the feed reader service or provide a better experience to the end user, but are harmful to publishers.

The whole thing is worth reading, but here’s a little excerpt I found particularly interesting:

I am not, nor should I be, in the business of making unilateral changes to the chosen business models of content publishers. As a reader I love it when publishers provide full-text, unencumbered feeds for all of their articles. I understand, however, that not all publications choose this route. When they don’t, I believe my responsibility as a reading service is to respect that choice.

This statement not only sets Feed Wrangler apart from Feedly’s questionable practices, it actually sets them apart from read-it-later services like Instapaper and Pocket. RIL apps are definitely in the business of (as David puts it) “inserting [themselves] into the experience the writer had decided to present to their audience”. They are without a doubt improving the reading experience in many cases, but arguably doing so at the “expense of the publisher’s relationship to their readers”.

There is a continuum of businesses that are built on top of the work done by writers and publishers. Instapaper is not completely benign, but is definitely closer to that end of the continuum than the increasingly malignant Feedly. For publishers the question is what to do about the continuum as a whole. As I’ve said in the past, I believe the answer — easier said than done, I know — is for publishers to provide an experience so compelling that readers want to consume their content in situ and not on someone else’s platform.

The question is whether publishers can find an economic model to support this. Ads, at least in the form of their current race to increasing distraction and intrusiveness, don’t seem to be it.

Judge, Jury and Social Media Executioner

It seems more and more often that we see people shamed on social media. They do something greedy, stupid, or bigoted, or a combination of all three, and it comes to the attention of someone who’s internet famous. Tweets go out, virtual mobs assemble, and pretty soon there are real world consequences. Firings, defacement of property, physical violence and death threats are not uncommon.

Of course there are also the apologists who step in to play devil’s advocate, defend the offender, or just straight up troll the mob and its leaders. Sometimes the retaliation is more violence directed back towards the shamers and the offender’s original victim.

And then there are people who step in to point out that mob justice doesn’t seem right, no matter how egregious the original offense. But by this time the issue is too polarized for a nuanced point of view and these people are lumped in with the apologists. Complicating matters is that they are almost indistinguishable from some of the more sophisticated apologist trolls.

Absent the heat of the moment, I think most reasonable people would acknowledge that this isn’t a good way to bring justice to any situation. It looks way too much like a vigilante lynch mob. And even if the offender is guilty in most of the cases, we all know that it’s only a matter of time before some innocent is scarred for life, their personal and professional reputations destroyed in a way that no number of retractions can repair.

How many true offenders must be brought to justice in order to justify the cost of one false positive?

This is an old problem — addressed in the offline world by a complex architecture of civil society and legal systems — but set in a new borderless, online world that has little regulation. My intuition recoils at the idea of government regulating behavior on the social web, but given human nature and the capacity for groups to do damage to individuals, it’s hard to imagine a future where this isn’t happening.

In the meantime we should take a more critical look at online, crowdsourced justice. In particular, the influential, apparent do-gooders who are in many cases stirring the mobs to action. Most of them probably have good intentions, but some seem to enjoy the role too much. They ride the powerful wave of online emotion to even greater influence. Rinse. Repeat.

We should exercise more restraint before seeking justice through social media. Is there a way of solving the problem by speaking to the offender directly? Is it possible that our tweet could lead to a disproportionate response? Is the individual in our sights just the unlucky last drop in an already full bucket, or are they truly responsible for the full extent to which we were wronged?

I’m sure that a lot of people who take their conflicts to Twitter or Facebook get no response at all. But some get a response far beyond their expectation or desire. In addition to the pain caused by the original offense, they may have to live with the guilt of a disproportionate response that ruined someone’s life.

Apple’s M7: Filling the Gaps for Indoor Location Estimation

I’ve seen and heard a lot of talk about Apple’s new M7 motion co-processor and iBeacon feature after the iPhone announcement earlier this week, most of it pointing out that these features lay some of the foundation for indoor mapping. But I haven’t seen anything explaining exactly why they have this significance. It has been several years since I worked in this space, but here is some speculation of my own.

You Are Here

First, some background. GPS doesn’t work indoors. Even if it did, it wouldn’t be sufficiently accurate for useful indoor mapping. Cellular tower triangulation does work indoors of course, but is much less accurate than GPS.

Another option is to triangulate position using data about Wi-Fi hotspots in the vicinity. The iPhone has had this capability for some time, initially using Skyhook’s data, but switching to Apple’s own data in 2010. Apple also recently acquired WifiSLAM, which had “developed ways for mobile apps to detect a phone user’s location in a building using Wi-Fi signals”.

Then there is iBeacon, Apple’s support for Bluetooth Low Energy (BLE). This enables BLE devices in the vicinity of an iPhone to wake it up and give it some information. That information could be location, so you could imagine these little BLE devices scattered around the mall telling nearby iPhones “Hey! You are here.”
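As a rough illustration of the listening side, here is a minimal CoreLocation sketch that ranges for beacons sharing a known UUID. The UUID string and the region identifier are placeholders I’ve made up; mapping a beacon’s major/minor values to “the food court” or “gate B12” would be up to the app and whoever installed the beacons.

```swift
import CoreLocation

// Minimal sketch: listen for iBeacons that share a known UUID.
// The UUID and identifier below are placeholders, not a real deployment.
final class BeaconListener: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,
        identifier: "example.mall.beacons"
    )

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startRangingBeacons(in: region)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        // Each beacon reports its identity (major/minor) and a rough distance.
        // Translating that identity into "you are here" is the app's job.
        for beacon in beacons {
            print("Beacon \(beacon.major)/\(beacon.minor) at roughly \(beacon.accuracy) m")
        }
    }
}
```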

Or Are You?

These are important steps forward for indoor mapping, but probably not the whole solution. Even in a future where there is a lot of Wi-Fi data on hand and iBeacons installed all over the place, the system would be largely outside Apple’s control and probably unreliable.

Add to the challenge that we’ll be hard to please, with high expectations for accuracy and reliability for indoor location. We think at a higher resolution indoors — important differences are measured not in city blocks, but in yards or even feet. Small mistakes and inaccuracies will be frustrating, even if they only happen once in a while.

Enter motion sensors and the M7.

Filling the Gaps

If you have one good position measurement you can use motion sensors (accelerometer, gyro and compass) to measure your subsequent movements and keep track of your position. This can’t be the whole solution for two reasons:

  1. A lot rests on the accuracy of the initial position measurement. The last GPS measurement before losing the satellite signal is probably not good enough.
  2. Even if you can get a good initial measurement, every subsequent adjustment based on motion will have a tiny estimation error. These errors will accumulate over time and at some point the estimated position will be too inaccurate to be useful.

But it is conceivable that iBeacon- and Wi-Fi-based positioning will eliminate these problems for many indoor areas.
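As a toy sketch of how those pieces might fit together, here is a tiny position estimator that dead-reckons from step-like motion updates and resets whenever an absolute fix (from an iBeacon or Wi-Fi) arrives. All the types, names and numbers are invented for illustration; a real implementation would sit on top of Core Motion and do far more careful filtering.

```swift
import Foundation

// Toy 2D indoor position estimator: dead reckoning between absolute fixes.
// Illustrative only; not a real positioning stack.
struct Point { var x: Double; var y: Double }

final class IndoorPositionEstimator {
    private(set) var estimate: Point
    private(set) var drift: Double = 0   // metres of accumulated uncertainty

    init(initialFix: Point) {
        self.estimate = initialFix
    }

    // Each motion update (say, a detected step plus heading) nudges the estimate
    // and adds a small error, which accumulates until the next absolute fix.
    func applyMotion(stepLength: Double, headingRadians: Double, perStepError: Double = 0.1) {
        estimate.x += stepLength * cos(headingRadians)
        estimate.y += stepLength * sin(headingRadians)
        drift += perStepError
    }

    // A beacon or Wi-Fi fix re-anchors the estimate and resets the drift.
    func applyAbsoluteFix(_ fix: Point, accuracy: Double) {
        estimate = fix
        drift = accuracy
    }
}

// Example: walk ten steps north, then pass a beacon that re-anchors the estimate.
let estimator = IndoorPositionEstimator(initialFix: Point(x: 0, y: 0))
for _ in 0..<10 {
    estimator.applyMotion(stepLength: 0.7, headingRadians: Double.pi / 2)
}
estimator.applyAbsoluteFix(Point(x: 0.2, y: 6.8), accuracy: 1.5)
```

The estimate is only trustworthy while the accumulated drift stays small, which is exactly why periodic fixes from beacons or Wi-Fi matter, and why the continuous measuring has to be cheap enough to run all day.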

There is one remaining, huge, deal-breaking challenge for the approach of using motion-based positioning to fill in the gaps: It requires the motion hardware to be continuously measuring and processing your movements. Pre-5S, this would have drained the battery in short order. The M7 co-processor that Apple announced on Tuesday solves this problem.

Leading the Way

None of these technologies alone will make reliable indoor location possible, but together they might. It will probably take a while before we see indoor mapping in a keynote. There is lots of infrastructure involved that isn’t under Apple’s control (deploying all those beacons, for example). And we have to wait until the majority of iPhones in market have M7 and iBeacon support.

Also, Apple needs its competitors to support BLE with iBeacon-like features, because malls and airports and sports stadiums probably won’t be installing thousands of beacons for iPhones alone. So the ability to listen to beacons isn’t an opportunity to differentiate the iPhone.

Where they can differentiate, though, is in the gaps.

I think on Tuesday Apple revealed a key part of their strategy. Most of the subsequent talk about the M7 has focused on fitness-oriented scenarios, and doing clever things when your phone detects certain activities (my car stopped, I sat down, etc.). Those might lead to nice apps and snazzy features, but I’m betting that Apple is interested in something much more fundamental: leading in the next frontier for mapping and location — indoors.