Why X Doesn’t Matter

One of the many things low on my to-do list (right down there with blogging more often) is to do more to promote the gospel and reasoning of Nick “IT Doesn’t Matter” Carr (he uses the less definitive “Does IT Matter?” when trying to sell books to swing audiences).  His argument is that since technology is now widely available to any company, there is no sense using it for competitive advantage.  This reasoning drives technology people crazy and often results in incoherent sputtering in response.  

My view is that the democratization of technology is something to praise, not bemoan.  I’m not sure anything provides sustainable competitive advantage over the long term, and there isn’t a lot of history to suggest technology ever did.  Technology, like anything else, is a “what have you done for me lately” input.  But I’ll leave it to others to have that argument.  I’m more interested in how this reasoning might be applied in other areas.  I’ve been toying with doing a couple of articles for the Harvard Business Review in this vein.

One is “Brains Don’t Matter”.  After all, everyone has a brain, so why bother to think?  If you come up with a good idea, someone else will see it and copy it, so why waste the time and energy?

Another is “Food Doesn’t Matter” for the restaurant business.  Everyone has access to the same raw ingredients, so why bother trying to differentiate yourself on the quality of the food you prepare?  Instead, focus on more sustainable advantages like parking, cushy chairs, napkin quality and maybe a nice view.  You can outsource the food preparation to a local pizza place that has economies of scale.  And in the long run, of course, visionaries tell us we’ll move to the Utility Food™ model where every restaurant has a pipe that just pumps in Soylent Green or whatever.  You may laugh, but this Utility Food™ model is being tested today.

(That was a long way to go to get that last link in, but the alternative post of “My jokes are coming true” needed more context).

Utility Computing or Futility Computing?

Perhaps the network is not in fact the computer.  The Register reports that after 14 months Sun still doesn’t have any customers for their $1/CPU/hour computing service.

It is easy to pick on Sun, but they’ve made the same fundamental error as everyone else touting utility, grid, on-demand and other flavors of buzzword computing: the economics just don’t work in the wide area.  Sun seems to get the economics wrong more often than most (e.g. buying a tape company just as disk became cheaper than tape), but they all suffer from the mainframe conceit that it is better to centralize processing even if that means the processing ends up far away from the task at hand.  That was once a good idea, but not any more.  Timesharing has lost some of its pizazz.  Processing is relatively abundant (see Jim Gray’s Distributed Computing Economics paper again).  You had better be doing a lot of processing to make the round trip worthwhile.  This is why it makes sense to interact with remote, high-value services as opposed to shipping things out to be processed elsewhere.  The software behind the service actually does something useful and is close to its own data.  In the meantime there is more and more power on the edge of the network to consume and remix those services.
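
To put rough numbers on that round trip, here is a back-of-envelope sketch in Python.  Sun’s $1 per CPU-hour is the only figure taken from above; the wide-area transfer price and the cost of a machine sitting next to the data are assumptions for illustration.

```python
# Back-of-envelope sketch: ship a job out to a $1/CPU-hour utility, or run it
# on a machine you already own next to the data?  Sun's price is the only real
# number; the WAN price and local cost are illustrative assumptions.

WAN_DOLLARS_PER_GB = 1.00     # assumed delivered price of public wide-area traffic
UTILITY_PER_CPU_HOUR = 1.00   # Sun's advertised utility price
LOCAL_PER_CPU_HOUR = 0.25     # assumed amortized cost of an in-house commodity server

def cheaper_to_ship(gb_round_trip: float, cpu_hours: float) -> bool:
    """True if sending the data out to the utility beats running the job locally."""
    remote = gb_round_trip * WAN_DOLLARS_PER_GB + cpu_hours * UTILITY_PER_CPU_HOUR
    local = cpu_hours * LOCAL_PER_CPU_HOUR
    return remote < local

# A compute-heavy job with a tiny input still loses at these prices...
print(cheaper_to_ship(gb_round_trip=0.1, cpu_hours=100))   # False
# ...and a data-heavy job loses before the CPU meter even starts running.
print(cheaper_to_ship(gb_round_trip=500, cpu_hours=100))   # False
```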

Sun says "In the long run, all computing will be done this way."  Keynes says we’re all dead in the long run.  Depending on your choice of strategy, the long run may come to pass sooner rather than later.

The Gilder Fallacy

Rich Karlgaard at Forbes has a recurring meme he calls the Cheap Revolution to explain the impact of increasing technological abundance.  He returns to it in the latest issue:

“The Cheap Revolution rolls on, making new billionaires even as it collects more old scalps each year. Step back and look at what’s happening in technology. Computation gets twice as fast every 18 months, and at the same price point. Storage evolves faster–every 12 months. Communications is fastest of all, doubling every 9 months.”

He is repeating an assumption that underlies an awful lot of conventional wisdom (and investment dollars) today: the idea that network bandwidth is growing relatively faster than computation or storage.  I give George Gilder credit for popularizing this notion in his Microcosm and Telecosm books and various articles.  The only problem is he was wrong when it comes to the real world.  So giving credit where credit is due, we call this the Gilder Fallacy.  Yes, in terms of relative increase in technical capability, communications are outstripping computation and storage.  George can explain in florid prose how ever more photons can be crammed down a strand of glass.  But unlike computation and storage, the customer’s cost of communications over public networks doesn’t mirror the rate of improvement in the lab.  In fact it badly lags the rate of improvement of computation and storage.  The technology improvements don’t get passed on in wide area networks (i.e. the Internet).  Just look at your broadband bill – my guess is it doesn’t halve every nine months.  Nor probably does your bandwidth double every nine months.  Some combination of telco pricing practices, municipal tax policies and last mile issues ensure the savings are seldom passed on to you and me, and certainly not at any rate approaching Moore’s Law.  You have to build your own network to ride the underlying improvement curve.
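
To see how far apart those curves get, here is a quick sketch that simply compounds the doubling periods from the Forbes quote.  The aside about what delivered broadband does over the same stretch is my own rough assumption, not a measurement.

```python
# Compound the doubling periods quoted above (computation every 18 months,
# storage every 12, communications every 9) over five years.

def improvement(doubling_months: float, years: float) -> float:
    """Capability multiple after `years`, given a doubling period in months."""
    return 2 ** (years * 12 / doubling_months)

YEARS = 5
print(f"computation: {improvement(18, YEARS):5.0f}x")   # ~10x
print(f"storage:     {improvement(12, YEARS):5.0f}x")   # ~32x
print(f"lab comms:   {improvement(9, YEARS):5.0f}x")    # ~102x

# Delivered broadband over the same five years (assumed for illustration):
# maybe two to four times the bits per dollar, nowhere near the lab curve and
# behind computation and storage as well.  That gap is the fallacy.
```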

Getting the underlying economics right provides an incredible tailwind.  People who grokked Moore’s Law two decades ago ran their businesses better than those who didn’t.  It helped them understand what was important and what were only temporary obstacles.  We’re moving from a world of scarcity to one of abundance in many areas, but you still need to understand relative abundance.  The Gilder Fallacy leads you to believe computation is the relatively scarce resource, so you waste bandwidth to optimize for computation.  But the underlying economics dictate the opposite.  The fallacy was refuted years ago, yet I am amazed at how many bets in the industry continue to suffer from it.

Jim Gray has a brilliant paper on Distributed Computing Economics that delves into the implications – in particular:

“Put the computation near the data. The recurrent theme of this analysis is that “On Demand” computing is only economical for very cpu-intensive (100,000 instructions per byte or a cpu-day-per gigabyte of network traffic) applications.”

“If telecom prices drop faster than Moore’s law, the analysis fails.  If telecom prices drop slower than Moore’s law, the analysis becomes stronger.   Most of the argument in this paper pivots on the relatively high price of telecommunications.  Over the last 40 years telecom prices have fallen much more slowly than any other information technology.  If this situation changed, it could completely alter the arguments here.   But there is no obvious sign of that occurring.”
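
The two halves of that rule of thumb hang together with a little arithmetic.  The only number added below is an assumed commodity processor speed of roughly two billion instructions per second; everything else comes straight from the quote.

```python
# Check that "a cpu-day per gigabyte" and "100,000 instructions per byte" are
# the same statement, assuming a commodity CPU retires about 2e9 instructions
# per second (an illustrative assumption, not a figure from Gray's paper).

INSTRUCTIONS_PER_SECOND = 2e9     # assumed sustained rate of a ~2 GHz processor
SECONDS_PER_DAY = 24 * 60 * 60
BYTES_PER_GB = 1e9

cpu_day = INSTRUCTIONS_PER_SECOND * SECONDS_PER_DAY      # ~1.7e14 instructions
print(f"{cpu_day / BYTES_PER_GB:,.0f} instructions per byte")   # ~173,000

# Same order of magnitude as the quoted 100,000 instructions per byte: a job
# needs roughly a CPU-day of work for every gigabyte it drags across the WAN
# before shipping the data out is worth it.
```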

Again, this applies to public networks.  Build your own network and you can come much closer to harnessing the underlying rate of technology improvement.  This will also help perpetuate a distinction between what you might do on a corporate network and what you might do over the Internet.

This has big implications for grid computing, On-Demand, “software as a service” and various other industry enthusiasms.  Many of them set sail thinking they had a tailwind when in fact the headwind will only grow fiercer over time.

Not only is the power on the edge, but the edge is getting more powerful on a relative basis.

The Power of Ecosystems

A nice piece in Fast Company by John Sviokla on the power of ecosystems.

The interesting question is whether you can plausibly make the transition from a closed/vertical industry model to an ecosystem/horizontal industry model.  It is hard to think of examples of companies successfully making that transition (got any candidates?).

Moving from owning the whole pie to accepting a smaller part of what you expect to be a much bigger pie means taking a revenue hit, and that is not a leap many companies are willing or able to make.  This is especially true of hardware companies trying to become software companies, because the existing hardware revenue you give up (or at least put at risk) typically dwarfs the available software revenue in the short and even medium term.  It is also true of companies trying to move from low-volume, high-cost models to high-volume, low-cost models.  The problem is that to get to high volume, low cost, there is usually a low-volume, low-cost waypoint along the way.  The radical change in cost structure required is just too painful for most companies to contemplate.

Mind the Bathwater

The unanimous Supreme Court ruling on Grokster bodes mighty poorly for companies whose primary purpose is to facilitate music piracy through peer-to-peer file sharing.  My out-on-a-limb investment advice: this might not be the best place for your retirement nest egg.  Getting paid to propagate spyware while encouraging people to pirate music never was a great business model and the legal risk now probably outweighs any upside.

I don’t have a strong opinion on whether this ruling will have the “chilling” effect on innovation that some have raised.  In general I lean toward the view that sanity will prevail and that we live in a world that can reasonably weigh the positives and negatives associated with any new technology (rule of thumb: make sure the legitimate uses outweigh the illicit ones and you’re probably ok).

I do however hope that peer-to-peer doesn’t get relegated to the dustbin of history along with Grokster and its cohorts.  Peer-to-peer has gotten a bad name by association, but there is much more to it than pirated media.  It is important as an application topology.  Nodes on the Internet can talk directly with one another and don’t have to be mediated by a server or other central control point.  The Internet can be a pipe as opposed to a destination.  Applications like Groove and Skype show the positive side of peer-to-peer.  As developers continue to refine firewall and NAT traversal, which is the biggest challenge for peer-to-peer applications today, you’ll see more and more of these direct connections.
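
To make the topology point concrete, here is a minimal sketch of two nodes exchanging a file directly over a TCP socket with no server in the middle.  It assumes the peers can already reach each other (say, on the same LAN); the NAT and firewall traversal mentioned above, which is the genuinely hard part, is deliberately left out.

```python
# Minimal peer-to-peer transfer sketch: one node listens, the other connects
# directly to it.  No central server mediates the exchange.  Assumes the peers
# are mutually reachable; real applications still need NAT/firewall traversal.

import socket

def receive_file(listen_port: int, out_path: str) -> None:
    """Receiving peer: accept one inbound connection and save whatever arrives."""
    with socket.create_server(("", listen_port)) as srv:
        conn, _addr = srv.accept()
        with conn, open(out_path, "wb") as f:
            while chunk := conn.recv(65536):
                f.write(chunk)

def send_file(peer_host: str, peer_port: int, in_path: str) -> None:
    """Sending peer: connect straight to the other node and stream the file."""
    with socket.create_connection((peer_host, peer_port)) as conn, open(in_path, "rb") as f:
        while chunk := f.read(65536):
            conn.sendall(chunk)

# e.g. run receive_file(9000, "photo.jpg") on one machine, then
# send_file("192.168.1.20", 9000, "photo.jpg") on the other.
```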

Why, for example, does photo sharing need to go through a server?  If I want to order some prints or share them very broadly, sure, a server model makes sense.  But when it comes time to share photos of the kids with grandma, there is little reason not to go direct.  In a world of always-on broadband connections and really powerful clients, we simply don’t always need a server.  There are scenarios where you must have a server or will want to mix centralized and peer-to-peer topologies, but the reality is that server hosting is still expensive.  Server hardware and software are relatively cheap – but bandwidth in big dollops is very expensive, as are the people required to manage the servers (there is still a ton of opportunity to simplify and automate server management).  Some of the biggest advocates of going peer-to-peer and getting traffic off servers are the people with the biggest datacenters – they understand these costs firsthand.
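
Here is a rough sketch of that cost asymmetry.  Every figure in it is an assumption for illustration rather than a quote from any provider, but the shape is the point: the box is the small line item.

```python
# Rough monthly cost of the server-in-the-middle model, with all figures
# assumed for illustration.  The hardware and software are the cheap part;
# metered bandwidth and the people running the server dominate.

MONTHLY_SERVER_HW_SW = 200.0    # assumed amortized hardware + software, $/month
MONTHLY_ADMIN_SHARE = 1500.0    # assumed slice of an administrator's time, $/month
DOLLARS_PER_GB_SERVED = 1.0     # assumed metered datacenter bandwidth, $/GB

def monthly_hosting_cost(gb_served: float) -> float:
    """Monthly cost of hosting the photo sharing centrally under these assumptions."""
    return MONTHLY_SERVER_HW_SW + MONTHLY_ADMIN_SHARE + gb_served * DOLLARS_PER_GB_SERVED

print(monthly_hosting_cost(gb_served=2000))   # 3700.0: bandwidth and people dominate

# The direct alternative has no incremental server bill at all; the photos go
# from one flat-rate broadband connection straight to another.
```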

Hopefully the obituaries being written this week put peer-to-peer in the proper (that would be not dead) perspective.