‘Big Data’ has a problem, and that problem is its name.

Dig deep into the big data ecosystem, or spend any time at all talking with its practitioners, and you will quickly start hitting the Vs. Initially Volume, Velocity and Variety, the Vs rapidly bred like rabbits. Now we have a plethora of new V-words, including Value, Veracity, and more. Every new presentation on big data, it seems, feels obliged to add a V to the pile.

Gartner stands by the original three, stating earlier this year that

Big Data are high-volume, high-velocity, and/or high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization.

(my emphasis)

But by latching onto the ‘big’ part of the name, and reinforcing it with the ‘volume’ V, we become distracted and risk missing the point entirely. The implication from a whole industry is that size matters. Bigger is better. If you don’t collect everything, you’re woefully out of touch. And if you’re not counting in petas, exas, zettas or yottas, how on earth do you live with the shame?

From the outset, though, size was only part of the picture. Streams of data from social networks, traffic management systems or stock control processes pose real challenges because of the speed with which data must be ingested, or the rapidity with which actionable decisions must be taken. Data volumes may only be a few gigabytes or – oh, the embarrassment – megabytes, but the challenge is still very real. Combining data of different types from disparate sources also creates opportunities. Video from traffic cameras, combined with the logs of pressure sensors beneath the roads, weather data from forecasters and historic records, and social networking comments about the smoothness (or otherwise) of the morning commute, builds a rich picture that no single source can provide. The initial data sets may be large, but the subset that is actually relevant to the problem being studied will often be far smaller. Variety is the challenge here, not volume.

Of all the Vs clamouring to join the initial three, the most compelling to me must surely be Value. What is the value of the insight offered by this data, regardless of how big it is, how fast it’s coming at me, or how many formats it comprises?

The correct interpretation of the right analysis, performed on the optimal set of data: surely that is what we should celebrate? Sometimes, certainly, that analysis will be performed on mind-bogglingly huge data stores. But sometimes it won’t, and the results can be just as valuable. Is it better to collect everything and then extract the gigabyte or two of data you actually need, or does it make more sense to know what you’re interested in and devise a collection strategy that gets you the right data in the first place?

Big Data is impressive stuff. This fledgling market segment is full of interesting companies doing remarkable things. The ability to hold genomes, city traffic systems, internet search logs, or complex financial models in memory and to manipulate them in order to derive insight in real time is stunning and transformative. The switch from highly structured relational databases to schema-less data stores creates new opportunities, and new challenges.

Let’s celebrate the value of data volume when it’s appropriate to do so, but let’s not create the patently false impression that big is best.

Ben Kepes and I jokingly used Twitter to announce a new company at last year’s Defrag. WeeData (‘wee’ as in small, not the other kind) was to concern itself with everything that big data’s champions did not, and that’s a very large market indeed. We’re hoping that the people who rushed to sign up as customers, investors, and Board members got the joke…

Image of a snowflake by Flickr user ‘bkaree1’