December 4, 2013

Big Data, it seems, cannot attract enough attention in the business press. Journalists rush to write stories whilst consultants such as McKinsey & Co. opine at length (for example “Mobilizing Your C-Suite For Big-Data Analytics”, McKinsey Quarterly, November 2013). Time perhaps to step back and strike a more cautious note about how the theme of big data will evolve.

A first point to be made is that data is not necessarily categorised by size. Data is not “big”, nor is it “small”. Big data as a term effectively refers to deploying analytical tools to draw meaning from the large mass of data – much of it unstructured, much of it captured in various forms of social media, and much of it woefully underutilised – stored in the data centres sprouting across the planet.

The second point, flowing from the first, is that Big Data is therefore a theme dependent on software and technology. Cheap storage has made big data a possibility. Something has to be used to gain access to, and extract meaning from, this mass of information. Cheap storage does not necessarily equate to cheap analytical tools.

In some ways this recalls an earlier phase of technology data mining: enterprise resource planning. This software – known as ERP – sought to capture, analyse and extract value from the data emanating from within a corporation. ERP is a big business, and it has had many benefits, but as any board member of a company that has adopted ERP will tell you, this software is more costly, more complicated and more disruptive to embed in a corporation than the vendors would have you believe. It brings benefits but also creates rigidities flowing from how data is entered, stored and manipulated by the system. The CEO of a division of one Cheverny client described the implementation of ERP in his business unit (which had different characteristics from the other units in the company) as “transforming my walk in the woods by adding 20 pound weights into my knapsack”.

And thus to the technology underpinning big data. Big data analytics have the potential, so the journalists and technology seers tell us, to tap into the wisdom of crowds and create what amount to miracles.

This has the potential to produce poorly-defined and costly big data strategies. Certain things, such as deploying Hadoop to store formerly discarded data for longer at lower cost, are obvious low-hanging fruit. Real-time analysis of data prior to storage, business intelligence analysis of data (both unstructured and structured) and predictive analytics are, to pick just three examples, more complicated and costly undertakings.

As for market statistics, 451 Research reports that, in a 2012 survey of CIOs, big data projects represented just 3% of the data storage footprint. Move ahead a year to the 2013 survey and, after a veritable deluge of headlines, big data’s share of the data storage footprint is… 3%. Big data grew at about the rate of the market.

This is not to say that big data is hype. Rather it is to draw attention to the fact that big data can represent a major IT initiative that could, before it benefits the bottom line of the corporation, enhance the bottom lines of software vendors and IT consultants. Caution does not equal negativity. It may instead be argued that a cautious approach to big data will yield benefits, as it enjoins senior management and boards to think about what they want to achieve; define an intellectual and strategic framework for the use of big data; test that framework against the capabilities of vendors; and deploy analytical tools in a cost-effective way driven by business objectives rather than consultant preferences.

For the software sector there will be a direct tension in the market between platform and best-of-breed strategies for big data, and this will be reflected in M&A activity.