The Myth of Customer Acquisition

How did you “acquire” that customer? Did they make one or more purchases? Did they sign up to your service? Did they give you their email address?

Sorry, but that isn’t worth a lot. They will buy the next thing somewhere else. They are already signing up to another service. If your email doesn’t automatically get sent to the junk folder, they will ignore it.

You don’t acquire a customer. You get access to their attention for a short period of time. You compete with many other things for each of these periods. The number of things competing for each period is getting larger all the time. The number of things doing exactly what you do, or selling exactly what you sell, is probably increasing too. And the periods themselves are getting shorter.

All is not lost. There are two things you can do.

First, you can compete for one quantum of attention. One visit from search, from an article that mentions you, from a social media share, or from your viral video. Or, much more valuable, via word of mouth. When you win that elusive quantum of attention, be sure to use it well. Present a way to solve a pressing problem. Be useful. Elicit joy.

Second, you can build trust. Every quantum of attention you win is an opportunity to play the long game. To build, with each small contact, an enduring relationship. The more trust there is, the more likely it is that you will get future attention. That your product will be heard above the cacophony. That your email will be read. That you will get more chances to transact, and to earn even more trust.

There are no “new” customers you can acquire, and you don’t have any existing customers to retain. There are people that trust you and people that don’t. You have to compete to win their attention either way. So throw away your models of customer acquisition and retention. Compete like crazy for each quantum of attention. And focus on using every one that you win to fulfill needs and build trust.

Feedly: Biting the Hand that Feeds

With the demise of Google Reader, many publishers recommended to readers that they check out Feedly. Now they might be regretting that decision. Feedly seems to have taken a turn that is decidedly unfriendly to publishers. This started with them pointing URL shares to their own copy of the publisher’s content, instead of the publisher’s original content. Now they appear to be stripping publishers’ clickthrough tracking tags and replacing them with their own.

David Smith of Feed Wrangler, a Feedly competitor, has spoken out against these practices, and against others that might be good for the feed reader service or provide a better experience for the end user but are harmful to publishers.

The whole thing is worth reading, but here’s a little excerpt I found particularly interesting:

I am not, nor should I be, in the business of making unilateral changes to the chosen business models of content publishers. As a reader I love it when publishers provide full-text, unencumbered feeds for all of their articles. I understand, however, that not all publications choose this route. When they don’t, I believe my responsibility as a reading service is to respect that choice.

This statement not only sets Feed Wrangler apart from Feedly’s questionable practices, it actually sets them apart from read-it-later services like Instapaper and Pocket. RIL apps are definitely in the business of (as David puts it) “inserting [themselves] into the experience the writer had decided to present to their audience”. They are without a doubt improving the reading experience in many cases, but arguably doing so at the “expense of the publisher’s relationship to their readers”.

There is a continuum of businesses that are built on top of the work done by writers and publishers. Instapaper is not completely benign, but is definitely closer to that end of the continuum than the increasingly malignant Feedly. For publishers the question is what to do about the continuum as a whole. As I’ve said in the past, I believe the answer — easier said than done, I know — is for publishers to provide an experience so compelling that readers want to consume their content in situ and not on someone else’s platform.

The question is whether publishers can find an economic model to support this. Ads, at least in the form of their current race to increasing distraction and intrusiveness, don’t seem to be it.

Judge, Jury and Social Media Executioner

More and more often, it seems, we see people shamed on social media. They do something greedy, stupid, or bigoted, or a combination of all three, and it comes to the attention of someone who’s internet famous. Tweets go out, virtual mobs assemble, and pretty soon there are real-world consequences. Firings, defacement of property, physical violence and death threats are not uncommon.

Of course there are also the apologists who step in to play devil’s advocate, defend the offender, or just straight up troll the mob and its leaders. Sometimes the retaliation is more violence directed back towards the shamers and the offender’s original victim.

And then there are people who step in to point out that mob justice doesn’t seem right, no matter how egregious the original offense. But by this time the issue is too polarized for a nuanced point of view and these people are lumped in with the apologists. Complicating matters is that they are almost indistinguishable from some of the more sophisticated apologist trolls.

Absent the heat of the moment, I think most reasonable people would acknowledge that this isn’t a good way to bring justice to any situation. It looks way too much like a vigilante lynch mob. And even if the offender is guilty in most cases, we all know that it’s only a matter of time before some innocent is scarred for life, their personal and professional reputations destroyed in a way that no number of retractions can repair.

How many true offenders must be brought to justice in order to justify the cost of one false positive?

This is an old problem — addressed in the offline world by a complex architecture of civil society and legal systems — but set in a new borderless, online world that has little regulation. My intuition recoils at the idea of government regulating behavior on the social web, but given human nature and the capacity for groups to do damage to individuals, it’s hard to imagine a future where this isn’t happening.

In the meantime we should take a more critical look at online, crowdsourced justice, and in particular at the influential, apparent do-gooders who in many cases are stirring the mobs to action. Most of them probably have good intentions, but some seem to enjoy the role too much. They ride the powerful wave of online emotion to even greater influence. Rinse. Repeat.

We should exercise more restraint before seeking justice through social media. Is there a way of solving the problem by speaking to the offender directly? Is it possible that our tweet could lead to a disproportionate response? Is the individual in our sights just the unlucky final drop in an already overflowing bucket, or are they truly responsible for the full extent to which we were wronged?

I’m sure that a lot of people who take their conflicts to Twitter or Facebook get no response at all. But some get a response far beyond their expectation or desire. In addition to the pain caused by the original offense, they may have to live with the guilt of a disproportionate response that ruined someone’s life.

Apple’s M7: Filling the Gaps for Indoor Location Estimation

I’ve seen and heard a lot of talk about Apple’s new M7 motion co-processor and iBeacon feature after the iPhone announcement earlier this week, most of it pointing out that these features lay some of the foundation for indoor mapping. But I haven’t seen anything explaining exactly why they have this significance. It has been several years since I worked in this space, but here is some speculation of my own.

You Are Here

First, some background. GPS doesn’t work indoors. Even if it did, it wouldn’t be sufficiently accurate for useful indoor mapping. Cellular tower triangulation does work indoors of course, but is much less accurate than GPS.

Another option is to triangulate position using data about Wi-Fi hotspots in the vicinity. The iPhone has had this capability for some time, initially using Skyhook’s data, but switching to Apple’s own data in 2010. Apple also recently acquired WifiSLAM, which had “developed ways for mobile apps to detect a phone user’s location in a building using Wi-Fi signals”.

Then there is iBeacon, Apple’s support for Bluetooth Low Energy (BLE). This enables BLE devices in the vicinity of an iPhone to wake it up and give it some information. That information could be location, so you could imagine these little BLE devices scattered around the mall telling nearby iPhones “Hey! You are here.”
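
As a conceptual sketch (TypeScript; the UUIDs, venue names and coordinates are entirely hypothetical), the beacon itself only broadcasts an identifier — it is the app, or a service behind it, that translates that identifier into “you are here”:

```typescript
// A BLE beacon only advertises an identifier (a UUID plus major/minor values in
// iBeacon terms). The mapping from identifier to a physical spot has to live in
// the app or on a server. Everything below is made up for illustration.

interface IndoorPosition {
  venue: string;
  x: number; // metres from a venue-specific origin
  y: number;
  floor: number;
}

const beaconMap: Record<string, IndoorPosition> = {
  "E2C56DB5:100:1": { venue: "Example Mall", x: 12.5, y: 40.0, floor: 1 },
  "E2C56DB5:100:2": { venue: "Example Mall", x: 55.0, y: 18.5, floor: 1 },
};

// Called whenever the OS reports a nearby beacon.
function positionFromBeacon(uuid: string, major: number, minor: number): IndoorPosition | undefined {
  return beaconMap[`${uuid}:${major}:${minor}`];
}

console.log(positionFromBeacon("E2C56DB5", 100, 2));
// -> { venue: "Example Mall", x: 55, y: 18.5, floor: 1 }
```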

Or Are You?

These are important steps forward for indoor mapping, but probably not the whole solution. Even in a future where there is a lot of Wi-Fi data on hand and iBeacons installed all over the place, the system would be largely outside Apple’s control and probably unreliable.

Add to the challenge that we’ll be hard to please, with high expectations of accuracy and reliability for indoor location. We think at a higher resolution indoors — important differences are measured not in city blocks, but in yards or even feet. Small mistakes and inaccuracies will be frustrating, even if they only happen once in a while.

Enter motion sensors and the M7.

Filling the Gaps

If you have one good position measurement you can use motion sensors (accelerometer, gyro and compass) to measure your subsequent movements and keep track of your position. This can’t be the whole solution for two reasons:

  1. A lot rests on the accuracy of the initial position measurement. The last GPS measurement before losing the satellite signal is probably not good enough.
  2. Even if you can get a good initial measurement, every subsequent adjustment based on motion will have a tiny estimation error. These errors will accumulate over time and at some point the estimated position will be too inaccurate to be useful, as the rough sketch below illustrates.
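
A rough simulation of that second point (TypeScript; the step length and noise figures are invented for illustration, and real dead-reckoning error would also include heading and calibration bias): each step’s measurement is slightly off, and the estimated position drifts further from the truth the longer you go without a fresh fix.

```typescript
// Simulate walking in a straight line while estimating position from noisy
// per-step measurements. Step length and noise values are made up for illustration.

// Box-Muller transform for normally distributed noise.
function gaussian(mean: number, stdDev: number): number {
  const u1 = 1 - Math.random(); // avoid log(0)
  const u2 = Math.random();
  return mean + stdDev * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

function simulateDrift(steps: number, stepLength = 0.7, noiseStdDev = 0.05): number {
  let trueX = 0;
  let estimatedX = 0;
  for (let i = 0; i < steps; i++) {
    trueX += stepLength;
    // Each measured step is slightly wrong; the errors accumulate in the estimate.
    estimatedX += stepLength + gaussian(0, noiseStdDev);
  }
  return Math.abs(estimatedX - trueX); // metres of accumulated error
}

for (const steps of [100, 1_000, 10_000]) {
  console.log(`${steps} steps -> ~${simulateDrift(steps).toFixed(2)} m of drift`);
}
```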

But it is conceivable that iBeacon- and Wi-Fi-based positioning will eliminate these problems for many indoor areas.

There is one remaining, huge, deal-breaking challenge for the approach of using motion-based positioning to fill in the gaps: It requires the motion hardware to be continuously measuring and processing your movements. Pre-5S, this would have drained the battery in short order. The M7 co-processor that Apple announced on Tuesday solves this problem.

Leading the Way

None of these technologies alone will make reliable indoor location possible, but together they might. It will probably take a while before we see indoor mapping in a keynote. There is lots of infrastructure involved that isn’t under Apple’s control (deploying all those beacons, for example). And we have to wait until the majority of iPhones in the market have M7 and iBeacon support.

Also, Apple needs its competitors to support BLE with iBeacon-like features, because malls and airports and sports stadiums probably won’t be installing thousands of beacons for iPhones alone. So the ability to listen to beacons isn’t an opportunity to differentiate the iPhone.

Where they can differentiate, though, is in the gaps.

I think on Tuesday Apple revealed a key part of their strategy. Most of the subsequent talk about the M7 has focused on fitness-oriented scenarios, and on doing clever things when your phone detects certain activities (my car stopped, I sat down, etc.). Those might lead to nice apps and snazzy features, but I’m betting that Apple is interested in something much more fundamental: leading in the next frontier for mapping and location — indoors.

Faith in The Job Unknown

This morning Horace Dediu, writer at the excellent Asymco blog [1], made the following statement:

I recently tweeted that any discussion related to wearable technology needs to begin with a description of the job it would be hired to do. Without a reason for building a product, you are building it simply because you can.

There is truth to this. Often talk of new products seems to ignore the realities of making something that people will find useful. Ten years ago people would try to ground any discussion about a potential product by asking the question “What problem does this new thing solve?”. These days it is more fashionable to ask “What job will the user hire this new thing to do?”. The latter is definitely an improvement. A problem can be a powerful motivator, but many big innovations are more about creating new opportunities than solving an existing problem.

That said, while I agree this is a very useful tool for teasing out the viability of incremental new products, it can be counterproductive as a prerequisite for discussions on a topic like wearable computing. For a well defined smartwatch product, yes, the specific job is important. And to be fair, this kind of detailed analysis is what Asymco does so well. But we are too early in the life of wearable computing to apply the same rigor.

Part of the reason some breakthroughs are made is precisely that the pioneers weren’t constrained by prosaic concerns about the mainstream or commercial utility of what they were working on. If they had been, it might have limited their thinking. Imagine if the Homebrew Computer Club had embraced the rule in Dediu’s statement. Sounds ridiculous, right? That’s because the job something will be hired to do is often the furthest thing from the minds of the people who are breaking new ground. They are doing it because it hasn’t been done before, or because doing it is exciting, or because other people they like or respect are doing it, or even… just because they can.

Ok, that much seems obvious, but I believe it goes further than just the Wozniaks of the world and also applies to the product innovators who drive new product categories to market.

For one thing, people who changed the world with technology, like Steve Jobs or Bill Gates, often didn’t ground their thinking in specific jobs as a first principle. Their intuition that their product had profound value drove them even when they didn’t know exactly what that value was.

“A computer on every desk and in every home” seemed like overreach in 1980. Without mainstream networking and then the Internet, most of the value we currently experience from our computers wasn’t even part of Gates’s thinking. In fact, as late as 1995 he famously overlooked the Internet in his book about the personal computing revolution, “The Road Ahead”. The reason Gates believed so strongly in personal computing wasn’t because of specific jobs. It was the intrinsic potential he saw in a general-purpose device that could be programmed by others.

Secondly, when people do talk about the jobs that a new wave of technology will do, they are seldom talking about the actual jobs that people end up hiring the technology to do. Commerce has accelerated the web into the mainstream, but I doubt anyone working on the HTTP protocol or the Mosaic web browser was imagining banner ads or Amazon.com. Arguably the single most important job the smartphone does today is provide a 4″ touchscreen canvas for developers to do almost anything they want, yet Apple didn’t even think that native programming was important when it first launched the iPhone.

So…

Should technologists work on platform level technologies without specific knowledge of the products they will enable? Yes. In fact, this is where many high risk, high return opportunities lie. It is completely reasonable to approach something like wearable computing from the broad opportunity and work your way in. Does anyone believe that a small brick like the iPhone is as personal as computing will get? No. Do we have ideas for how it might get more personal? YES. It will know more about us (sensors), it will know more about our immediate environment (sensors), and it will be easier to carry, more disposable and more convenient (form factor). That’s enough to get started.

Should analysts explore the potential future market for wearable computing without any idea what the products will actually do? Yes. There are many interesting discussions around the size this market might be and the way it might impact other existing product categories. Admittedly most of this analysis will only be useful for things like establishing upper bounds or playing out extreme scenarios, but that’s useful.

What specific jobs will wearable computing do to change the world? We don’t know that yet. What we can do is wave our hands a little. We can say that wearable computing will (1) allow us to take some of the value we see in current computing modalities into new situations and (2) will open whole new areas of opportunity by being more personal and more ubiquitous.

Now, maybe these two things satisfy the requirement for specifying a “job”. But if they do, then some simple practical objections to the “job test” emerge. What qualifies as a job? Who gets to decide the level of specificity that is required? Who makes the judgment on which jobs are legitimate? And would they have given the green light for the web browser based on http://icanhas.cheezburger.com/?

Of course, none of this means that we shouldn’t talk about the jobs we hire wearable computing to do. It can be very productive to ground thinking in ideas about specific opportunities that new products might create, or problems that they might solve. Often, iterating between small, specific ideas and big, vague opportunities is particularly good at accelerating the process.

But a “description of the job” is an inappropriate prerequisite for the discussion.

Although many of us have lived in pre-computing, pre-Internet, pre-mobile worlds, we have forgotten what those worlds were like. Everything that seems so obvious in hindsight was completely invisible to us then. Mostly even invisible to the people who changed the world. In the future we will look back and think that our pre-wearable computing world was rather quaint, and we will marvel at the ways it has surprised us. Any ability we have to look into that future is compromised by the application of rigorous specificity as a precondition to the discussions that imagine it.

    1. Asymco is my answer to the question “If I only read one blog on mobile computing business and strategy, what should it be?”

 

iWatch is a Proxy for Wearable Computing

I don’t know anything about Apple’s plans regarding iWatch. From my perspective the interesting questions aren’t around what Apple will do, and when they will do it, but rather around the opportunity for products in a new computing modality that is more available and more personal than the phone.

That there is an opportunity is quite obvious because computing is such a long way from being available in all situations. Consider…

Carry Options

Before smartphones (in the mainstream this means “before iPhone”) you needed a bag in order to take your computing with you. Now that we have smartphones, we only need a pocket.

But what about situations where we don’t have a suitable pocket or bag? On the beach in a speedo or clubbing in a little black dress.

How much functionality would you forgo if you could have a device that took computing from 95% of your daily situations to 99% of those situations?

Disposability

If you destroy your iPhone and you aren’t paying for insurance, then it will cost you hundreds of dollars for a pre-owned replacement. That’s a lot of money to the overwhelming majority of people, who will think twice before taking their indispensable device boating, cycling or bar hopping. Or taking it somewhere where it might be stolen.

Many people won’t think about this much until they destroy their first phone, and then after that they will think about it every time they use it on an escalator.

How much functionality would you forgo for a device that was disposable, in the sense that if it was lost or destroyed it would be no more inconvenient or costly than losing a disposable camera?

Convenience

Have you ever found yourself using your iPhone when your iPad is less than 10 feet away? Or using your iPad when your MacBook Pro is in the next room?

Yes, we’re much lazier than we might have imagined. But the more generous, and probably more accurate explanation is that the value of immediate access to computing without interruption outweighs the need to see something in higher fidelity on a larger screen.

When there is a computing modality more ubiquitous and more immediately accessible than the phone, the iPhone will find itself on the other side of the trade-off.

For many people, being able to tell time without reaching for their phone is NOT sufficient value to justify wearing a watch, as evidenced by the trend to ditch the watch and use one’s phone to tell time. But this is not evidence that the watch form factor is a loser; it is evidence that the time alone isn’t enough functionality to justify wearing one on utility grounds.

The hard, but necessary thing to do when thinking about iWatch is to forget about watches as we know them. For example, forget about telling time. Whatever this new thing is, it will give us the time. But that will represent a negligible fraction of the overall value that the device offers. Similar to the way the voice calling phone app only represents a small fraction of what the iPhone offers, but much more extreme than that.

Also, don’t assume that an iWatch is necessarily a wristwatch. That might be the best form factor, but it might not be.

Rather, think about it in terms of taking the value of connected computing — all the value one can get from using apps and websites while connected to the Internet — and making that value available in the life situations where it currently isn’t available, and more conveniently available in the ones where it is.

How can iWatch be more wearable, more disposable, and more accessible than a smartphone? What smartphone functionality will we forgo for this increased availability?

Then, given a modality that is more personal and more ubiquitous than even the smartphone, what new opportunities will this open up? The early smartphones took traditional computing functionality and jammed it on the phone, but the greatest smartphone innovations are all about the new opportunities of immediacy and location that were created when the computer became something that was with you most of the time.

What new opportunities will emerge when the computer is with you ALL of the time, and is perhaps even physically connected to your body?

From this perspective Google Glass should be seen as one of Google’s forays into the space that an iWatch will enter. It is probably too delicate, too expensive and a little too creepy to be the mainstream success story in this space, but it is an interesting experiment nonetheless.

Microsoft’s SPOT Watch (later Smart Watch) was also interesting. At least a decade too early and a marketing impossibility, but the SPOT UI was excellent and the watches weren’t bad either. I was a watch snob back then and I was happy to wear a Suunto smart watch.

Recently we started to see watch-type devices that pair with smartphones. These are interesting, but the configuration complexity of needing two devices to get one thing done isn’t great. And of course, if you need your phone too then you’re not really getting any carry or disposability benefits.

The point of all this is twofold…

First, predicting that Apple will at some point do an “iWatch” of some sort is equivalent to predicting that they will still be in business when these new modalities roll around. Those making the predictions will be wrong until they are right. Just like iPhone.

Second, don’t think of this new thing as a watch. They might call it a watch, because “watch” is a word we know, one that conveys a lot of information (small, technology, personal), and one that doesn’t scare us with newness. Put differently, use of the word “watch” would be a marketing decision. Arguably a very smart one.

But this new thing represents something much more profound. It is a continuation of the trend to connect us more deeply to technology, and to each other via technology, with all the opportunity, wonder and potential for abuse that goes along with that.

Please Tell Me He Didn’t Actually Say That

This from Paul Graham in a recent Inc. interview when asked for a specific “tell” for poor likelihood of success when assessing YC applicants:

One quality that’s a really bad indication is a CEO with a strong foreign accent. I’m not sure why. It could be that there are a bunch of subtle things entrepreneurs have to communicate and can’t if you have a strong accent. Or, it could be that anyone with half a brain would realize you’re going to be more successful if you speak idiomatic English, so they must just be clueless if they haven’t gotten rid of their strong accent. I just know it’s a strong pattern we’ve seen.

At best, an irresponsibly vague and unsupported generalization. He is “not sure why” a foreign accent would impede a CEO. He doesn’t present any kind of evidence (statistically significant sample?) on which to base his conclusion. And what exactly is a “strong foreign accent”?

Perhaps more importantly, even if there is a big sample, who’s to say that this is causation and not just correlation? There might be another factor, like prejudice, that makes this bit of “wisdom” a self-fulfilling prophecy. Of course, the true causal factors might be more benign, but that doesn’t change the underlying point.

Leaders should be working to make our communities more inclusive, not less so. Sure, a strong accent might make things harder. But let’s try to be more open-minded and fight the instinct to judge someone’s competence or likelihood of success based on their proficiency in a language that isn’t their first, and might be their third or fourth. Don’t rule them out with a lazy generalization. Find ways to compensate. Help them. If VCs in general aren’t making the effort to see beyond the accent, set the example and encourage others to follow.

This isn’t charity. That person with a strong foreign accent probably achieved a small miracle by getting from wherever that accent comes from to the point where they are making a YC pitch in the Valley… in English.

Courage and tenacity — probably important qualities in a startup CEO.

Paul Graham is very highly regarded in the startup and tech community. People look up to him and take his lead. If the quote above is really his, I wish he would elaborate. If it isn’t, I hope he disowns it and explains what he was really getting at.

This post has been updated from the original, more emotional version.

A Tale of Two Tiers: The End of Premium Display Advertising as We Know It

For a while now advertisers and publishers have thought of display ads in two main buckets: premium and remnant. Premium campaigns were sold to the advertiser, either directly or through a network, by a salesperson. The advertiser would be sold a specific number of impressions during a specific time period and they would theoretically have some control over where the ads would appear — on which websites and sometimes even alongside which content.

But publishers were always left with some unsold impressions and would typically sell them to a network like Google AdSense as remnant inventory. In a good month the impressions sold as remnant would be a single-digit percentage of the total, but in a bad month they could be a much larger share of the overall number of impressions.

The canary in the premium advertising coal mine was the massive difference between the CPM (cost per thousand impressions) rates that a publisher could charge for their premium and remnant inventory. In my experience the difference could be an order of magnitude.
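
To make that gap concrete, here is a back-of-the-envelope calculation (TypeScript; the CPM figures and impression counts are purely illustrative) of what an order-of-magnitude CPM difference means on the same inventory:

```typescript
// CPM = cost per thousand impressions, so revenue = (impressions / 1000) * CPM.
// The numbers below are illustrative only, not actual rates.

function revenue(impressions: number, cpm: number): number {
  return (impressions / 1000) * cpm;
}

const monthlyImpressions = 2_000_000;
const premiumCpm = 10.0; // dollars, hypothetical
const remnantCpm = 1.0;  // dollars, hypothetical: an order of magnitude lower

console.log(`Premium: $${revenue(monthlyImpressions, premiumCpm).toFixed(0)}`); // $20000
console.log(`Remnant: $${revenue(monthlyImpressions, remnantCpm).toFixed(0)}`); // $2000
```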

This screamed two things:

  1. The benefits that premium ads offer advertisers would have to be big enough to justify this level of price discrimination between premium and remnant inventory.
  2. Remnant inventory represented a big opportunity for arbitrage, so it was inevitable that new players would enter the market with ad products that packaged so-called remnant inventory in interesting ways.

When I started seeing ads from our premium advertisers also appearing on youlookfab.com via AdSense, I realized that #1, the price discrimination, was getting harder to justify. Identical creative, appearing in the exact same ad position on the same day. Just 10x cheaper.

Then came #2 — the rise of the machines — ad exchanges that to a large extent automate the matching of ad inventory supply and demand. Although the remnant inventory for an individual publisher was often a small percentage of their overall inventory, the aggregate remnant inventory across thousands of sites represented a lot of eyeballs. And at a much lower cost per eyeball.

Not only that, but purchasing ads from an exchange involves no salespeople and no traditional sales cycle. Want a few million impressions for your ad during the first week of May? Bam. You got it. Sure, you don’t know exactly where those ads are going to appear, but the buy took you 5 minutes and cost a fraction of the price.

Of course, the more overall inventory that is sold programmatically, the better the average “quality” of those impressions and therefore the better the value proposition that programmatic buying offers. So we can expect ad exchanges, demand side buying and their ilk to accelerate their inroads into the premium market. In fact, this process is already quite far along. Case in point is a company like Federated Media, which was one of the pioneers of high touch premium campaigns and conversational media, but through their acquisition of Lijit they have pivoted to make programmatic buying of ads a central part of their offering.

Although companies like FM are still doing high touch, premium campaigns, they are only doing them with very large companies that have very large budgets. For these campaigns the economics of a costly sales cycle still make sense. The question is, for how long?

It is tempting to view this as yet another example of the innovator’s dilemma, but I don’t think all is lost for premium, bespoke campaigns. Yes, the pendulum will overshoot in the direction of programmatic buying, and the days of easy CPMs are over, but there will be a new normal where both high volume programmatic buying and high margin premium campaigns co-exist.

Already there are innovators moving to fill the gap left by larger networks that now only focus on campaigns with 6-figure budgets. Nativ.ly, founded by ex-FM veterans, is one example of a company looking to define new kinds of carefully thought out, high value campaigns for advertisers that want more precision and more care taken with their brand, and opportunities to advertise in more creative ways than an IAB banner or the latest frightening ad unit that takes over your browser window. Companies like Nativ.ly look more like boutique agencies than ad networks. After all, to make the economics of the sales cycle work at a smaller scale they need to be adding a lot of creative value [1].

The key question: How will boutique networks and the niche publishers they partner with quantify the value that premium campaigns can offer to a brand? Intuitively, we all know it is there — we despair of the dumb banner advertising we see all over the web and laud campaigns that are creative, targeted and intelligently tied into social media — but how will we measure awareness and brand uplift that doesn’t translate into immediate business? “We know it when we see it” still isn’t an effective way to present ROI.

  1. “Native” is the buzzword that has emerged to describe campaigns that are more, well, native to the places they appear. The canonical example is the ads alongside search results in Google or Bing. But for publishers it is about getting advertisers closer to the editorial content. If the potential conflict of interest challenges are negotiated properly, this can be very powerful. Advertisers can associate themselves with specific content and specific publishers that reflect their brand values. And their message appears in a context that makes sense. However, I hesitate to use the term “native” to describe what a company like Nativ.ly is doing because the now ubiquitous “You might also like” or “Also on the web” sponsored links under articles – mostly automated, sometimes annoying and often spammy – are also referred to as native advertising.

Privacy on iOS 6 is a Wake Up Call for 3rd Party Web Analytics Tools

There is much discussion of the way that the goals of advertising and privacy are at odds. But advertising is not the only casualty when browser makers clamp down their privacy settings. Another big category that this will affect is 3rd party analytics tools. Google Analytics (GA), for example, is an extremely powerful (and free) way for website owners to capture aggregate information about visitors in order to figure out what is working and what isn’t.

This is what I was doing this morning — browsing through our sources of traffic in GA — when something jumped out at me. The direct visitors to our site had started climbing rapidly in mid September, faster than any other category. This is strange, because you would expect “direct” visitors, who have typed your address into the browser address bar, to grow more slowly than “referral” visitors, who came by clicking a link to your site, or “search” visitors, who found your site using a search engine.

Digging a little deeper, I found that the increase was strictly among new visitors. This might be explained by press coverage in a printed publication or a TV mention, situations where someone heard about us but didn’t have a link to click. But I didn’t know about any recent coverage, and this sort of influx of visitors is normally a spike, not sustained growth.

What had changed after September 18 to catalyse this growth? Mid September… My conscious brain was working through any changes we had made on the site during the last month when my subconscious brain tapped it on the shoulder and said: “iPhone 5?”.

Of course. The first people who ordered the new model started receiving them on September 20. And I could quickly confirm that all the additional direct visits from new users were people using the version of Safari in iOS 6.

Still, what exactly was going on? I know there are some privacy settings changes in iOS 6, so my first thought was that Google Analytics was no longer able to distinguish between new and returning visitors. Indeed, the pageviews per user did seem to be dropping.

This didn’t account for the difference though. First, the decline in pages per visit seemed to start earlier than September 20. And second, returning visits from iOS 6 did not go to zero, so clearly GA was able to detect return visitors in some cases. It was something else.

Now that I knew what I was looking for — something in iOS that was impacting analytics — it was only a short search before I found the answer in an article by Danny Sullivan (Search Engine Land). According to Sullivan, it is indeed privacy related, but the issue is that iOS is not allowing Google Analytics to see the referrer (the bit of info that tells it where you came from) when you search from the search box in the Safari browser. (For the nerds, Safari is sending the searches through Google SSL search.)
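
In other words, an analytics script buckets each visit by the referrer it can see; take the referrer away and the visit can only be counted as “direct”. Here is a minimal sketch of that classification logic (TypeScript; the search-engine list and rules are simplified assumptions, not Google Analytics’ actual implementation):

```typescript
// Simplified illustration of how a web analytics script might classify a visit.
// Real tools use far more elaborate rules; this is not their code.

type TrafficSource = "direct" | "search" | "referral";

const searchEngines = ["google.", "bing.", "yahoo."]; // simplified list

function classifyVisit(referrer: string | null): TrafficSource {
  // No referrer at all: the visit can only be attributed to "direct",
  // whether the user typed the URL or the browser stripped the referrer.
  if (!referrer) return "direct";

  const host = new URL(referrer).hostname;
  if (searchEngines.some((engine) => host.includes(engine))) return "search";
  return "referral";
}

console.log(classifyVisit("https://www.google.com/search?q=fashion")); // "search"
console.log(classifyVisit("https://example.com/some-article"));        // "referral"
console.log(classifyVisit(null)); // "direct" (what iOS 6 Safari searches look like)
```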

The analytics for iOS in particular support this explanation. Right around September 20, iOS traffic sources start to shift from Google to “direct”. The rate probably reflects the penetration of iOS 6.

My take-away from all this is that the move towards privacy might have a big impact on 3rd party analytics services. Not only GA, but any other service that is running on someone else’s server and gathering info about your visitors via a tracking pixel or JavaScript that you put in your pages. In other words, all of them.

Now might be the time to invest in self-hosted solutions. Several years ago it was all about self-hosted analytics, then Google bought Urchin and the move towards 3rd party hosted analytics began. Privacy could put a stop to all that and take us back to the days of running something like Urchin on our own servers. Although losing the power of GA would be sad, a resurgence of self-hosted solutions would be awesome. If you have ever tried to find a good and affordable analytics tool to run on your server, you’ll know what I mean.

There Will Be No Accounting for (Other People’s) Taste in Tablets

For more than a decade, legions of Apple fans have been wondering how it is possible that people would choose a Windows PC over a Mac. After all, user experience is critical, right? And there is no question, at least in the mind of a Mac user like me, that Apple’s neatly integrated hardware+software experience is vastly superior to the franken-machines that Microsoft and its OEMs conspire to put in Best Buy.

Yet people bought, and are still buying, PCs in far greater numbers than Macs.

It must be the file formats, we say. Microsoft Office dominates word processing and spreadsheets and people need to share those documents. Yes, there is a pretty good version of Office for the Mac, but it is never quite as good as the Windows version.

It must be Windows’ hold on the enterprise market. It is much more convenient to have a computer at home that is similar to the one you have at the office.

It must be the ubiquity of Windows. The fact that so many people have Windows PCs makes it much easier to find help and advice.

These are all true, but there is another factor that is much harder for Apple fans to understand: Not everyone sees the beauty. We’re wine connoisseurs watching, with condescending bewilderment, as the rest of the party happily drinks $8 box wine.

Sure, the simplicity of Apple design makes a huge difference for users in the early stages of a new concept’s introduction. What the Mac was to the GUI, the iPod was to MP3 players, the iPhone was to smartphones, and the iPad is to tablets. Apple does a fantastic job of introducing the world to these complex new product categories. But as we as a society begin to get comfortable with the concepts, we can tolerate more complexity. Simplicity is always important, but it becomes relatively less important over time. Other things — like “is it the cheapest one”, or “does it come in neon pink”, or “does it have a USB port” — become more important.

We thought people were choosing the Apple product because of its elegant simplicity, but most of them were really just choosing the simplicity. And to someone who doesn’t resonate with Apple’s aesthetic sensibilities, the elegance becomes a constraint when Apple prioritizes it over things like extra ports, customizability and expansion options.

So it will be with tablets. And while it is too early to tell whether Surface the product will be successful, Surface the concept of a credible, non-Apple iPad alternative is inevitable. And who better to deliver it than Microsoft, a brand on more of the world’s computers than any other.

Ha ha, we scoff. Who wants to do Microsoft Office on a tablet? Office is boring. And tablets have a completely different use case to laptops. Who would want one to run full Windows?

Answer: Lots of people. People with different priorities, working different jobs, living in different countries. People we don’t quite understand.