Why X Doesn’t Matter

One of the many things low on my to-do list (right down there with blogging more often) is to do more to promote the gospel and reasoning of Nick “IT Doesn’t Matter” Carr (he uses the less definitive “Does IT Matter?” when trying to sell books to swing audiences).  His argument is that since technology is now widely available to any company, there is no sense using it for competitive advantage.  This reasoning drives technology people crazy and often results in incoherent sputtering in response.  

 

My view is that the democratization of technology is something to praise, not bemoan.  I’m not sure anything provides sustainable competitive advantage over the long term, and there isn’t a lot of history to suggest technology ever did.  Technology, like anything else, is a “what have you done for me lately” input.  But I’ll leave it to others to have that argument.  I’m more interested in how this reasoning might be applied in other areas.  I’ve been toying with writing a couple of articles for the Harvard Business Review in this vein.

 

One is “Brains Don’t Matter”.  After all, everyone has a brain, so why bother to think?  If you come up with a good idea, someone else will see it and copy it, so why waste the time and energy?

 

Another is “Food Doesn’t Matter” for the restaurant business.  Everyone has access to the same raw ingredients, so why bother trying to differentiate yourself on the quality of the food you prepare?  Instead, focus on more sustainable advantages like parking, cushy chairs, napkin quality and maybe a nice view.  You can outsource the food preparation to a local pizza place that has economies of scale.  And in the long run, of course, visionaries tell us we’ll move to the Utility Food™ model where every restaurant has a pipe that just pumps in Soylent Green or whatever.  You may laugh, but this Utility Food™ model is being tested today.

 

(That was a long way to go to get that last link in, but the alternative post of “My jokes are coming true” needed more context).

Utility Computing or Futility Computing?

Perhaps the network is not in fact the computer.  The Register reports that after 14 months Sun still doesn’t have any customers for their $1/CPU/hour computing service.

 

It is easy to pick on Sun, but they’ve made the same fundamental error as everyone touting utility, grid, on demand and other flavors of buzzword computing: the economics just don’t work in the wide area.  Sun seems to get the economics wrong more often than most (e.g. buying a tape company just as disk became cheaper than tape), but they all suffer from the mainframe conceit that it is better to centralize processing even if that means the processing ends up far away from the task at hand.  That was once a good idea, but not anymore.  Timesharing has lost some of its pizazz.  Processing is relatively abundant (see Jim Gray’s Distributed Computing Economics paper again).  You’d better be doing a lot of processing to make the round trip worth it.  This is why it makes sense to interact with remote, high value services as opposed to shipping things out to be processed elsewhere.  The software behind the service actually does something useful and is close to its own data.  In the meantime there is more and more power on the edge of the network to consume and remix those services.

 

Sun says "In the long run, all computing will be done this way."  Keynes says we’re all dead in the long run.  Depending on your choice of strategy, the long run may come to pass sooner rather than later.

The Gilder Fallacy

Rich Karlgaard at Forbes has a recurring meme he calls the Cheap Revolution to explain the impact of increasing technological abundance.  He revisits it in the latest issue of Forbes:

 

“The Cheap Revolution rolls on, making new billionaires even as it collects more old scalps each year. Step back and look at what’s happening in technology. Computation gets twice as fast every 18 months, and at the same price point. Storage evolves faster–every 12 months. Communications is fastest of all, doubling every 9 months.”

 

He is repeating an assumption that underlies an awful lot of conventional wisdom (and investment dollars) today: the idea that network bandwidth is growing relatively faster than computation or storage.  I give George Gilder credit for popularizing this notion in his Microcosm and Telecosm books and various articles.  The only problem is that he was wrong when it comes to the real world.  So, giving credit where credit is due, we call this the Gilder Fallacy.  Yes, in terms of relative increase in technical capability, communications are outstripping computation and storage.  George can explain in florid prose how ever more photons can be crammed down a strand of glass.  But unlike computation and storage, the customer’s cost of communications over public networks doesn’t mirror the rate of improvement in the lab.  In fact it badly lags the improvement in computation and storage.  The technology improvements don’t get passed on in wide area networks (i.e. the Internet).  Just look at your broadband bill – my guess is it doesn’t halve every nine months.  Nor, probably, does your bandwidth double every nine months.  Some combination of telco pricing practices, municipal tax policies and last-mile issues ensures the savings are seldom passed on to you and me, and certainly not at any rate approaching Moore’s Law.  You have to build your own network to ride the underlying improvement curve.
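
To put rough numbers on those doubling periods, here is a quick illustrative sketch (in Python; the five-year horizon is arbitrary) of the capability gains they imply – which is exactly the curve your broadband bill does not follow:

```python
# Capability multiples implied by the doubling periods quoted above,
# compounded over an (arbitrary) five-year horizon.
def improvement(doubling_months: float, horizon_months: float) -> float:
    """Capability multiple after horizon_months, doubling every doubling_months."""
    return 2 ** (horizon_months / doubling_months)

HORIZON = 5 * 12  # months

for label, period in [("computation (18-month doubling)", 18),
                      ("storage (12-month doubling)", 12),
                      ("lab bandwidth (9-month doubling)", 9)]:
    print(f"{label}: ~{improvement(period, HORIZON):.0f}x")

# Roughly 10x, 32x and 102x respectively.  The fallacy is assuming the price of
# wide-area bandwidth tracks that 9-month lab curve; in practice it does not.
```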

 

Getting the underlying economics right provides an incredible tailwind.  People who grokked Moore’s Law two decades ago ran their businesses better than those who didn’t.  It helped them understand what was important and what was only a temporary obstacle.  We’re moving from a world of scarcity to one of abundance in many areas, but you still need to understand relative abundance.  The Gilder Fallacy leads you to believe computation is the relatively scarce resource, so you waste bandwidth to optimize for computation.  But the underlying economics dictate the opposite.  The fallacy was refuted years ago, yet I am amazed at how many bets in the industry continue to suffer from it.

 

Jim Gray has a brilliant paper on Distributed Computing Economics that delves into the implications – in particular:

 

“Put the computation near the data. The recurrent theme of this analysis is that “On Demand” computing is only economical for very cpu-intensive (100,000 instructions per byte or a cpu-day-per gigabyte of network traffic) applications.”

 

“If telecom prices drop faster than Moore’s law, the analysis fails.  If telecom prices drop slower than Moore’s law, the analysis becomes stronger.   Most of the argument in this paper pivots on the relatively high price of telecommunications.  Over the last 40 years telecom prices have fallen much more slowly than any other information technology.  If this situation changed, it could completely alter the arguments here.   But there is no obvious sign of that occurring.”
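
As a sanity check on Gray’s break-even figure, here is a back-of-the-envelope sketch; the 100,000 instructions per byte is his number, while the assumption of a roughly one-billion-instructions-per-second processor is mine:

```python
# Back-of-the-envelope check of Gray's break-even figure (not code from the paper).
INSTRUCTIONS_PER_BYTE = 100_000      # Gray's threshold for "On Demand" to pay off
BYTES_PER_GB = 10**9
INSTRUCTIONS_PER_SECOND = 10**9      # assumed ~1 GIPS commodity processor (my number)

instructions_per_gb = INSTRUCTIONS_PER_BYTE * BYTES_PER_GB           # 1e14 instructions
cpu_seconds_per_gb = instructions_per_gb / INSTRUCTIONS_PER_SECOND   # 1e5 seconds
cpu_days_per_gb = cpu_seconds_per_gb / 86_400                        # ~1.2 days

print(f"Computation needed per gigabyte shipped: ~{cpu_days_per_gb:.1f} CPU-days")
# Unless you do about a CPU-day of work on each gigabyte you ship over the wide
# area, it is cheaper to keep the computation near the data.
```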

 

Again, this applies to public networks.  Build your own network and you can come much closer to harnessing the underlying rate of technology improvement.  This fact also will help perpetuate a distinction between what you might do on a corporate network and what you might do over the Internet.

 

This has big implications for grid computing, On-Demand, “software as a service” and various other industry enthusiasms.  Many of them set sail thinking they had a tailwind when in fact the headwind will only grow fiercer over time.

 

Not only is the power on the edge, but the edge is getting more powerful on a relative basis.

The Power of Ecosystems

A nice piece in Fast Company by John Sviokla on the power of ecosystems.

The interesting question is whether you can plausibly make the transition from a closed/vertical industry model to an ecosystem/horizontal industry model.  It is hard to think of examples of companies successfully making that transition (got any candidates?).

Moving from owning the whole pie to accepting a smaller slice of what you expect to be a much bigger pie means taking a revenue hit, and that is not a leap many companies are willing or able to make.  This is especially true of hardware companies trying to become software companies, because the existing hardware revenue you give up (or at least put at risk) typically dwarfs the available software revenue in the short and even medium term.  It is also true of companies trying to move from low volume, high cost models to high volume, low cost models.  The problem is that on the way to high volume, low cost, there is usually a low volume, low cost waypoint.  The radical change in cost structure required is just too painful for most companies to contemplate.

Mind the Bathwater

The unanimous Supreme Court ruling on Grokster bodes mighty poorly for companies whose primary purpose is to facilitate music piracy through peer-to-peer file sharing.  My out-on-a-limb investment advice: this might not be the best place for your retirement nest egg.  Getting paid to propagate spyware while encouraging people to pirate music never was a great business model and the legal risk now probably outweighs any upside.

 

I don’t have a strong opinion on whether this ruling will have the “chilling” effect on innovation that some have warned about.  In general I lean to the view that sanity will prevail and that we live in a world that can reasonably weigh the positives and negatives associated with any new technology (rule of thumb: make sure the legitimate uses outweigh the illicit ones and you’re probably ok).

 

I do, however, hope that peer-to-peer doesn’t get relegated to the dustbin of history along with Grokster and its cohorts.  Peer-to-peer has gotten a bad name by association, but there is much more to it than pirated media.  It is important as an application topology.  Nodes on the Internet can talk directly with one another and don’t have to be mediated by a server or other central control point.  The Internet can be a pipe as opposed to a destination.  Applications like Groove and Skype show the positive side of peer-to-peer.  As developers continue to refine firewall and NAT traversal, the biggest challenge for peer-to-peer applications today, you’ll see more and more of these direct connections.
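
To make “nodes talking directly” concrete, here is a minimal, hypothetical sketch of a direct peer connection over plain TCP sockets.  It simply assumes both peers can reach each other – in other words, it ignores the firewall/NAT traversal problem entirely – and the port number is a placeholder:

```python
# Minimal direct peer-to-peer exchange over TCP: one peer listens, the other
# connects straight to it, and no server in the middle mediates the transfer.
# Assumes both machines can reach each other; a real P2P app still has to solve
# firewall/NAT traversal before it gets this far.  The port number is a placeholder.
import socket

PEER_PORT = 9000

def receive_from_peer(port: int = PEER_PORT) -> bytes:
    """Run on the receiving peer (say, grandma's machine)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", port))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            chunks = []
            while True:
                data = conn.recv(65536)
                if not data:
                    break
                chunks.append(data)
        print(f"received {sum(len(c) for c in chunks)} bytes directly from {addr[0]}")
        return b"".join(chunks)

def send_to_peer(peer_ip: str, payload: bytes, port: int = PEER_PORT) -> None:
    """Run on the sending peer; ships the bytes (a photo, say) straight across."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((peer_ip, port))
        cli.sendall(payload)
```

Run receive_from_peer() on one machine and send_to_peer() with that machine’s address on the other; the bytes never touch a server.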

 

Why, for example, does photo sharing need to go through a server?  If I want to order some prints or share them very broadly, sure, a server model makes sense.  But when it comes time to share photos of the kids with grandma, there is little reason not to go direct.  In a world of always-on broadband connections and really powerful clients, we simply don’t always need a server.  There are scenarios where you must have a server or will want to mix centralized and peer-to-peer topologies, but the reality is that server hosting is still expensive.  Server hardware and software are relatively cheap – but bandwidth in big dollops is very expensive, as are the people required to manage the servers (there is still a ton of opportunity to simplify and automate server management).  Some of the biggest advocates of going peer-to-peer and getting traffic off servers are the people with the biggest datacenters – they understand these costs first hand.

 

Hopefully the obituaries being written this week put peer-to-peer in the proper (that would be not dead) perspective.

From Browse to Search to Subscribe

The grief for not having posted again has already come and gone, but I’m still well within my goal of not measuring posts on geologic time.

 

There is an idea coming into focus for me that is kind of obvious when you look at the specific instances, but I’m not sure the general impact has sunk in yet.  There is a user paradigm evolution occurring around us that is turning the Internet on its head.


The browser jumpstarted mainstream Internet use and made browsing the user paradigm.  You could type in a URL or follow links and it worked pretty well as long as you knew where you wanted to go or someone else had the foresight to provide a link to where you might want to go.  But this approach couldn’t keep up with the hypergrowth of the Web.  Even if you surfed all day long, the unknown was growing exponentially faster than the known.

 

Enter the search engine.  Instead of being limited to what you know about or can find a link to, you can query across millions of Web sites and billions of Web pages.  Search makes vastly more of the Web accessible, but it too has limitations.  Simple queries return preposterous quantities of links (as opposed to answers), while complex queries go unanswered.  Personal relevance and understanding user intent are, to be charitable, in their infancy.

 

Both browsing and searching are about discovery, but have little to do with consumption. Discovery is work. You navigate and enter queries.  Consumption is when you get something valuable.  Browsing or searching by themselves are just a means; the end is consumption.  The way these terms get used every day reinforces this gap.  “Can I help you?”  “No thanks, I’m just browsing.”  “Did you find what you were looking for?”  “Nope, I’m still searching.”

 

Subscribe to a New Approach

 

The subscribe model allows software to act on our behalf and significantly improve consumption.  RSS is obviously the first successful taste of the subscribe model (we’ll conveniently forget the whole "Push" episode of the late 20th century).  Subscribing doesn’t replace browsing or searching any more than searching replaced browsing.  Both will remain common activities with continued growth and innovation.  They’re probably how you will find most of the things you subscribe to.

 

But when you subscribe, software does things on your behalf.  It automatically grabs stuff you’ve expressed an interest in (and in the future software smarts probably will do a pretty good job of finding things you might have an interest in).  Discovery and navigation disappear once you’ve found something worthy of ongoing attention.  Software bridges delivery with the consumption experience and lets you interact on your terms.  With RSS, instead of having to visit a wide range of web sites in order to read blogs, people can subscribe using newsreader software that will retrieve and present those data feeds.  You can read them at your convenience.  They are available offline so you can catch up any time.  You can sort or search them.  Most newsreader software is pretty simple today as blogs are designed to be read by people.  But newsreaders will get smarter and enhance the consumption experience.  As we grapple with lots of feeds, it would be great if software could help prioritize.  Or make it easier to read a lot of content on-screen.
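
A bare-bones newsreader really is not much code.  Here is a sketch using the third-party feedparser package; the feed URL is just a placeholder for whatever you actually subscribe to:

```python
# Tiny newsreader sketch: fetch a subscribed feed and present its items so they
# can be read, sorted or searched at your convenience.  Uses the third-party
# feedparser package (pip install feedparser); the URL is a placeholder.
import feedparser

FEED_URL = "https://example.com/blog/rss.xml"

feed = feedparser.parse(FEED_URL)
print(feed.feed.get("title", FEED_URL))

for entry in feed.entries:
    print(f"- {entry.get('title', '(untitled)')}")
    print(f"  {entry.get('link', '')}")
    print(f"  {entry.get('published', 'no date')}")
```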

 

Personal video recorders like TiVo are another early example of the subscribe paradigm.  Based on your preferences, they grab video streams out of the sky or off a coaxial cable.  You can then watch them whenever you want.  And you can easily skip the bits you don’t want to watch (so you can fast forward to the ads).  Or watch some bits over and over (attention Janet Jackson fans).  You’re in control of how you consume it.  PVR software can make suggestions about what you might like to watch based on your history and preferences.  In a world where storage is the hardware component with the best price-performance improvement trajectory, speculative caching of feeds seems like it will explode.  Who cares if you never look at many of the captured bits?  The convenience of what you do consume outweighs the cost.

 

Like blogs, TV is also designed to be consumed by human eyeballs.  But there are all kinds of interesting feeds that might be consumed by software, which then does something for us.  Software updates and patches.  Event calendars.  Business data.  Traffic flows.  Search queries.  Stock prices.  All can be subscribed to as feeds and then software can party on that information to our benefit – personalize, analyze, visualize, manipulate, aggregate, synopsize, prioritize, etc.
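
For feeds aimed at software rather than eyeballs, “partying on the information” can start as simply as filtering a feed against a personal watch list.  Here is a hypothetical sketch using only the Python standard library and assuming an RSS 2.0 feed; the URL and keywords are placeholders:

```python
# Sketch of software acting on a subscribed feed: pull an RSS 2.0 feed with the
# standard library and surface only the items matching a personal watch list.
# The URL and keywords are placeholders for illustration.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/alerts/rss.xml"
WATCH_LIST = {"security update", "traffic", "earnings"}

with urllib.request.urlopen(FEED_URL) as response:
    root = ET.parse(response).getroot()

for item in root.iter("item"):                      # RSS 2.0: <channel><item>...
    title = (item.findtext("title") or "").strip()
    link = item.findtext("link") or ""
    if any(term in title.lower() for term in WATCH_LIST):
        # A real agent might install a patch, raise an alert or recalculate a route;
        # here we just bubble the item to the top of the user's attention.
        print(f"[PRIORITY] {title} -> {link}")
```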

 

The subscribe model promises people a more valuable experience with much less effort. The old rallying cry of Information At Your Fingertips is no longer a dream.  We all have access to more information than we could ever possibly process.  The challenge now is sifting through oceans of information to get the right stuff and, equally important, being able to take the appropriate action and do something with it.  It kind of turns the way we use the Internet on its head and conjures up all kinds of cool ideas for software.