
Is this your year to invest in big data?

By Geoffrey Moore, Apr 05, 2018

Everybody is talking about it. Mine your data for insights. Use your digital exhaust to feed machine learning programs to develop algorithms that will transform your productivity. Amazon does it. Google does it. Microsoft does it. All three offer cloud services to help you get started. Shouldn’t you be doing it too?

To answer this question properly, I think you have to answer the following question first:

Does your company have, or will your customers grant you, proprietary access to log data containing signals that can be extracted and used to predict outcomes and prescribe actions that have material consequences?

There’s a lot to unpack here, so let me take things phrase by phrase.

Proprietary access means that you get to see data either exclusively or at least before others do. You do not have to own the data, but you do have to secure the right to process it as you see fit. Otherwise, there will be no barrier to entry to sustain whatever competitive advantage you create.

Fair enough. Say we do, or say our customers will grant us such access. Now, what do we mean by log data? Well, what we are not referring to is transaction data, the stuff that Systems of Record and Systems of Engagement store in relational databases. That data we have had access to for decades. Instead, we mean the unending streams of digital signals computers record whenever they are in operation—click streams from websites, sensor readings from industrial machinery, network traffic from routers, log-ins and badge-ins at work, GPS readings from our cars and trucks, meter readings from the public utility—in short, the myriad data streams that collectively go by the name of machine data.
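To make this concrete, here is a minimal sketch, in Python, of what a single line of such machine data might look like and how a behavioral signal can be pulled out of it. The log format, the field names, and the idea that a visit to a /pricing page signals buying interest are all illustrative assumptions, not references to any particular system.

```python
import re
from datetime import datetime

# A hypothetical web-server access-log line (the format is an illustrative assumption).
log_line = '203.0.113.7 - - [05/Apr/2018:09:14:23 +0000] "GET /pricing HTTP/1.1" 200 5120'

# Extract who acted, when, and what they did: the raw material of behavioral signals.
pattern = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d+)'
)
match = pattern.match(log_line)
if match:
    event = {
        "ip": match.group("ip"),
        "time": datetime.strptime(match.group("ts"), "%d/%b/%Y:%H:%M:%S %z"),
        "path": match.group("path"),
        "status": int(match.group("status")),
    }
    # The visit to /pricing was never declared anywhere; it is pure recorded behavior.
    print(event)
```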

Why would we care about such data? Because it records the actual behavior of people and things in the world, not what they said they would do, not what they were programmed to do, but what they actually did. Prior to digital logging, behavior effectively became invisible once it was complete. It could only be reconstructed through testimony. But now, by studying digital logs, we can see what actually happened, detecting patterns and inferring causes that were never explicitly declared or perhaps even intended but ended up shaping outcomes nonetheless.

Although we have had such log data for many decades, it was traditionally used for forensic purposes, something like the flight recorders on airplanes that are accessed only after a crash. Unless there was an incident, there was just too much data, the overwhelming amount of which was simply noise. Yes, there are likely to be signals in there somewhere, but not any that human beings can find. Machine learning systems, on the other hand, can—hence all the interest in data science and data scientists. However, until quite recently, the cost of aggregating, storing, and processing data on an ongoing basis at this scale was simply prohibitive. Well, not anymore. It still isn’t cheap, but it is no longer prohibitive. It all depends on the size of the prize.

Specifically, wherever decisions are being made probabilistically, and especially when they are being made in real time, any algorithm that increases the probability of success is likely to deliver a material reward. This has already been demonstrated in a variety of domains, including high-frequency trading, digital ad placements, cyber-security, application performance management, fraud detection, and product recommendations. In each case, the higher the volume of transactions at stake, the smaller the improvement in probability needed to create a meaningful reward. Even a click-through rate improvement of less than 1% is significant when the volume of ad placements is in the millions.
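To see why, consider a back-of-the-envelope calculation, sketched in Python below. Every figure in it (impression volume, baseline click-through rate, the size of the lift, and the value of a click) is an illustrative assumption, not data from any of the domains named above.

```python
# Back-of-the-envelope value of a small click-through-rate (CTR) lift at scale.
# Every figure here is an illustrative assumption, not real data.
impressions = 100_000_000   # ad placements per month
baseline_ctr = 0.020        # 2.0% of impressions clicked before the algorithm
improved_ctr = 0.021        # the algorithm adds one-tenth of a percentage point
value_per_click = 0.50      # assumed revenue per click, in dollars

extra_clicks = impressions * (improved_ctr - baseline_ctr)
extra_revenue = extra_clicks * value_per_click

print(f"Extra clicks per month:  {extra_clicks:,.0f}")    # 100,000
print(f"Extra revenue per month: ${extra_revenue:,.0f}")  # $50,000
```

A lift of one-tenth of a percentage point, invisible in any single placement, compounds to roughly $600,000 a year at this assumed volume.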

These, of course, are the known applications, the ones already exploited by the first movers. But we should understand that they have barely scratched the surface. The full range of opportunity is better captured in a starkly simple claim I read recently in Autodesk’s visionary book, The Future of Making:

When an industry becomes computable, everything changes irrevocably.

I love this word computable. What happens when you can compute manufacturing using 3D printing? When you can compute transportation with self-driving vehicles? When you can compute customer service with chatbots? When you can compute energy usage and space utilization with smart buildings? When you can compute K-6 education with personalized curricula?

Granted, many of these things are still far in the future, but there is no question that when they arrive, they will change things irrevocably. This is the domain of digital transformation, where computation has migrated from being about the business to actually being the business. If your company can leverage proprietary access to log data to transform your own operations or those of your customers into a computable state, then now is indeed the time for you to invest in big data.

One caveat before closing. A key qualification for all that has been said above is that the decisions at stake are being made probabilistically. When, by contrast, decisions are being made deliberatively, especially over an extended period of time and incorporating multiple points of view, that means the operations in question are not yet computable, and the value of processing log data diminishes considerably. Yes, it can still provide insights, but such insights get subsumed into a much broader set of exchanges, and it becomes difficult to attribute definitive economic value to them. We have moved from the world of must-have to nice-to-have, and the risk/reward ratio has turned upside down.

It is critical, therefore, that planning teams distinguish between these two situations. In an earlier blog, I suggested we call the domain of probabilistic decision-making Operational Intelligence (OI) and that of deliberative decision-making Business Intelligence (BI). In that context, log data processing is fundamental to OI and tangential to BI, the latter getting far more of its information from more structured data. The point is, while it is now feasible to collect and process big data, it is still expensive, particularly when you take into account the time, talent, and management attention required not only to extract the signals from the noise and convert them to actionable algorithms but also to reengineer the underlying business processes and enterprise organizations to take advantage of them. If you are not getting an OI return, if all you are getting is BI insights, it is not yet time to take the plunge.

About the author


Geoffrey Moore is an author, speaker, and advisor who splits his consulting time between start-up companies in the Mohr Davidow portfolio and established high-tech enterprises, most recently including Salesforce, Microsoft, Intel, Box, Aruba, Cognizant, and Rackspace.

Moore’s life’s work has focused on the market dynamics surrounding disruptive innovations. His first book, Crossing the Chasm, focuses on the challenges start-up companies face in transitioning from early adopters to mainstream customers. It has sold more than a million copies, and its third edition has been revised so that the majority of its examples and case studies reference companies that have come to prominence in the past decade. Moore’s most recent work, Escape Velocity, addresses the challenge large enterprises face when they seek to add a new line of business to their established portfolio. It has been the basis of much of his recent consulting. Irish by heritage, Moore has yet to meet a microphone he didn’t like and gives between 50 and 80 speeches a year. One theme that has received a lot of attention recently is the transition in enterprise IT investment focus from Systems of Record to Systems of Engagement. This is driving the deployment of a new cloud infrastructure to complement the legacy client-server stack, creating massive markets for a next generation of tech industry leaders.

Moore has a bachelor’s degree in American literature from Stanford University and a PhD in English literature from the University of Washington. After teaching English for four years at Olivet College, he came back to the Bay Area with his wife and family and began a career in high tech as a training specialist. Over time he transitioned first into sales and then into marketing, finally finding his niche in marketing consulting, working first at Regis McKenna Inc., then with the three firms he helped found: The Chasm Group, Chasm Institute, and TCG Advisors. Today he is chairman emeritus of all three.

