- Arguing the primacy of data or intuition is a fool’s quest – you need both
- To scale intuition beyond the individual analyst, business needs a systematic approach to synthesising the two domains of data interpretation
- Failing to do this can reverse progress in evidence-based management and open the back door to the intuition biases we were trying to improve on in the first place
A debate that is bubbling in digital, analytics and marketing communities centres on the question of whether big data heralds the end of intuition. Recently, some authors have taken the opposite perspective, focusing on the primacy of intuition over data. Both views are oversimplifications and miss the critical point.
We need a systematic way of leveraging and scaling intuition while correcting for the built-in bias that accompanies it. Only when we have done this will we be able to access the real value in data by driving beneficial change back into the environment.
Bypassing the bias of intuition
The human brain is a wonderfully sophisticated pattern recognition and inference engine but it is not right 100% of the time. Systematic biases in judgement have been well documented in both the business and behavioural psychology literature.
Data practitioners often witness such blind spots first hand in the field. In many cases, analytics can cut through bias and bring transparency to opaque problems.1 After all, a core capability of a data scientist is to improve on commonly held heuristics, which would otherwise go unchallenged.
Senior executives have become familiar with the commercial value that evidence-based methods can deliver. Recently, one Australian bank reported a 40% acceptance rate on next best offers promoted through its online and call centre channels2 – a substantial improvement on the prior metric.
Another organisation began using rigorous multivariate testing on its e-commerce landing pages several years ago. The pages had been designed by a leading creative agency to drive maximum impact and business results. However, on testing hundreds of variations ranging from very simple to complex, it was found that the simplest options often performed best. They may not have ‘looked like’ great designs, but they performed two or three times better.
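To make the testing logic concrete, here is a minimal sketch (not the organisation’s actual method) of how two landing-page variants might be compared with a two-proportion z-test. The traffic and conversion figures are invented for illustration:

```python
import math

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate higher than A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se  # z > 1.96 is significant at the 5% level (two-sided)

# Hypothetical traffic: A = the agency's 'rich' design, B = the simple page
z = conversion_z_test(conv_a=120, n_a=4000, conv_b=310, n_b=4000)
```

In practice a multivariate test runs many such comparisons at once, which calls for corrections for multiple testing, but the core calculation is the same.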
Identifying and processing data signals
Data arises as signals in the environment. These signals may be supply chain costs, software development times, or customer conversion rates. If we have good sensors, we can capture some of this data and make use of it. To do that we have to process the data, make a decision, and generate an action that is effected back into the environment. The decision to act is critical, and is made by something we can broadly call a ‘decision agent’. That agent can have varying degrees of intelligence depending on the scope of the analysis, but generally, there are three modes that describe the way an agent operates:
Human-driven mode
If automated decisioning tools are unavailable, do not provide complete closure, or the data is too complex to be processed easily, a large part of the decision can be human-driven. This is the case where asset managers make investment calls based on multiple data sources pulled through into portfolio analytics systems. Complex data is summarised in a series of dashboards to allow full use to be made of human pattern recognition and inference skills. In B2C environments, social service teams use dashboards surfacing complex social network data, enabling them to collaborate and resolve customer enquiries.3
Rule-based mode
In this mode, decision-making is automated under supervision. Human agents define the ‘condition-action’ rules, which are then executed by software. Rules can of course be optimised using feedback, but wholly new rules are mostly generated in exogenous systems. This mode characterises anything from something as simple as a thermostat to more sophisticated CRM and next best offer applications.
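A condition-action rule set can be sketched as a tiny engine in the thermostat spirit: humans write the rules, software executes them. The target temperature and dead-band values below are illustrative assumptions:

```python
# A minimal condition-action rule engine. Each rule is a (condition, action)
# pair authored by a human; the software merely evaluates them in order.

def make_rules(target_temp=21.0, band=0.5):
    return [
        (lambda t: t < target_temp - band, "heat_on"),
        (lambda t: t > target_temp + band, "heat_off"),
    ]

def decide(rules, temp, default="no_change"):
    # Fire the first rule whose condition matches the sensed signal
    for condition, action in rules:
        if condition(temp):
            return action
    return default
```

The same shape scales up to CRM-style decisioning: the conditions become customer attributes and the actions become offers, but the rules are still exogenously authored rather than learned.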
Machine learning mode
In the third scenario, an intelligent agent makes decisions and is capable of learning from and optimising those decisions. Such agents can be employed if condition-action rules are unknown or the heuristics are known to be sub-optimal. In essence, this is the mode you apply when you don’t know what to do. The textbook example of this is Sebastian Thrun’s autonomous (Google) car.4 More quotidian commercial applications include search engines and image processing networks, surveillance agents on the lookout for insider trading, or game ‘bots’.
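A minimal sketch of such a learning agent, assuming a simple epsilon-greedy strategy over a set of offers (the offer names, reward signal, and parameters are hypothetical, not drawn from any system mentioned above):

```python
import random

class EpsilonGreedyAgent:
    """Learns which offer converts best from feedback alone --
    no human-authored condition-action rules required."""

    def __init__(self, offers, epsilon=0.1):
        self.offers = list(offers)
        self.epsilon = epsilon                      # exploration rate
        self.counts = {o: 0 for o in self.offers}   # times each offer was tried
        self.values = {o: 0.0 for o in self.offers} # estimated conversion rate

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.offers)                    # explore
        return max(self.offers, key=lambda o: self.values[o])    # exploit

    def learn(self, offer, reward):
        self.counts[offer] += 1
        # Incremental mean of observed rewards for this offer
        self.values[offer] += (reward - self.values[offer]) / self.counts[offer]
```

Run against simulated customers, the agent converges on the best-converting offer without ever being told the underlying rates; that discovery of unknown condition-action mappings is what distinguishes this mode from the previous two.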
Real scenarios may combine all three of these applications but the individual modes themselves are useful components in overall system design. In each of these modes intuition has a different but very specific role. In the human mode, intuition, pattern recognition and inference play a pivotal role in decisioning and pushing change into the environment. In the rule-based mode, human intuition engages in development of the condition-action rules, while in the intelligent agent mode, human intuition plays a key role in design of the autonomous agent.
Using data to scale intuition
If we simply claim that ‘intuition is still important’ we open a back door that allows fact-based decisions to be overturned by fiction. Such ‘cop-out clauses’ provide the holders of biased views a rationale to continue on their trajectory in the face of overwhelming evidence.
A fertile environment for this today is the interpretation and analysis of voice and free-text data, where the immaturity of semantic ontologies can be used by operational units to undermine data-driven results.
Those who challenge should prepare their cases with the same level of rigour as the precept being debated. We can’t simply claim the primacy of intuition and revert to gut feel. What we need is a structured, systematic approach to determining when we should let the evidence speak and when we should mediate it with intuition. These poles need not be mutually exclusive; analysing and combining them in the ways outlined above can lead to a system in which data does not compete with intuition but scales it.
This article is by Jason Juma-Ross, former Digital Intelligence Lead for PwC.
1 For example, see Dash, Mihir, Dagha, Jay H., Sharma, Pooja and Singhal, Rashmi, An Application of GARCH Models in Detecting Systematic Bias in Options Pricing and Determining Arbitrage in Options (March 8, 2012). Journal of CENTRUM Cathedra: The Business and Economics Research Journal, Vol. 5, Issue 1, pp. 91-101, 2012. Available at SSRN: http://ssrn.com/abstract=2018422 and Wager, Stefan and De Treville, Suzanne, Constant Salvage Value Models: A Source of Systematic Bias in Predicting the Value of Lead-Time Reduction (January 15, 2013). Also available at SSRN: http://ssrn.com/abstract=2202422 or http://dx.doi.org/10.2139/ssrn.2202422
2 Banking Day News Bites, 9th August 2013, ‘Data Analytics boost cross-sell success.’
3 See excerpt from the February 1st 2013 edition of Today Tonight on the NAB’s Social Media Command Centre, http://www.youtube.com/watch?v=PBf9TSiKvAw
4 Markoff, John (October 9, 2010). “Google Cars Drive Themselves, in Traffic”. The New York Times, http://www.nytimes.com/2010/10/10/science/10google.html?scp=1&sq=thrun&st=cse&_r=0&gwh=E8BC39DDB661C9CBCFE818FC63BDC72F