Evidence-based Decision Making

Conrad Taylor writes:

On Thursday 3rd of November 2016, about thirty people gathered at the British Dental Association to discuss the topic of ‘Evidence-Based Decision Making’, in a workshop-like session led by Steve Dale, who practises as an independent consultant under the name ‘Collabor8Now’.

The NetIKX difference

Before I give readers an account of the meeting, and some thinking about it, I’ll describe a few things that often make NetIKX meetings ‘different’ from those in other organisations devoted to information and knowledge management. This meeting was a good expression of those differences.

For one thing, NetIKX is not dominated by academics – most who come to the meetings work with knowledge and information in government departments, business corporations, agencies and the like. That majority is then seasoned with a sprinkling of consultants who work in those kinds of business environment.

Secondly, the pattern of most NetIKX meetings is to have one or two thought-provoking presentations, followed by discussions or exercises in ‘table groups’ (called syndicate sessions). These syndicate sessions typically occupy about a third of the time, and are followed by a pooling of ideas in a brief plenary. That’s quite different from the pattern of lecture plus brief Q&A encountered at so many other organisations’ meetings.

When you combine those two features – the nature of the audience and the participatory table-group engagement – the Network for Information and Knowledge Exchange does live up to its ‘network’ title pretty well. The way Steve organised this meeting, with a heavier than usual slant towards table-group activity, made the most of this opportunity for encounter and exchange.

Setting the scene

Steve explained that he had already delivered this ‘package’ in other contexts, including for the Knowledge and Innovation Network (KIN) associated with Warwick Business School (http://www.ki-network.org/jm/index.php). We know Steve is also interested in the idea of ‘gamifying’ processes: he hoped the work he had prepared for us would be fun. There would even be an element of competition between the five tables, with a prize at stake.

Steve started with a proposition: ‘Decisions should always be based on a combination of critical thinking and the best available evidence’. Also, he offered us a dictionary definition of Evidence, namely, ‘the available body of facts or information indicating whether a belief or proposition is true or valid’.

The first proposition, of course, raises the question of what you consider to be the best available evidence – whose opinions you trust, for example. That, it turned out, was the question at the heart of Steve’s second exercise for us.

As for that ‘definition’, I have my doubts. It could be interpreted as saying that we start with a ‘belief or proposition’, and then stack information around it to support that point of view. That may be how politics and tabloid journalism work, but I am more comfortable with scientific investigation.

There are at least two ways in which science looks at evidence. If an explanatory hypothesis is being tested, the experiment is framed in such a way that evidence from it may overthrow the hypothesis, forcing us to modify it. And very often, before there is yet a basis for confidently putting forth a hypothesis, ‘evidence’ in the form of observed facts or measurements, and even apparent correlations, is worth taking note of anyway: this then constitutes something that requires explaining. Two cases in point would be field notebooks in biology and series measurements in meteorology.

Similarly, ‘evidence’ in a forensic investigation should float free of argument, and may support any number of causal constructions (unless you are trying to fit somebody up). That’s what makes detective fiction fun to read.

Certainly, in our complex world, we do need the best possible evidence, but it is often far from easy to determine just what that is, let alone how to interpret it. I shall end this piece with a few personal thoughts about that.

Correlation and causation

Steve’s following slides explored what happens when you confuse ‘correlation’ (things, especially trends, which happen within the same context of time and space) with ‘causation’. For example: just as Internet Explorer was losing market share, there was a parallel decline in the murder rate; and from the start of the Industrial Revolution, average global temperatures have been trending upwards, closely correlated with a decline in piracy on the high seas. In each case, did the former cause the latter, or the other way round?

Those, of course, are deliberately silly examples. But often, correlation may usefully give us a hint about where to look for a causal mechanism. The global warming trend has been found (after much data collection at an observatory in Hawaii) to correlate with an increase in the proportion of carbon dioxide in the atmosphere. That observation spurred research into the ‘greenhouse gas’ effect, helping us to understand the dynamics of climate change. As for the field of medicine, where causative proof is hard to nail down, sometimes correlation alone is deemed convincing enough to guide action: thus NICE recommends donepezil as a palliative treatment for Alzheimer’s, though its precise mechanism of action is unproven.
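The point is easy to demonstrate with a few lines of code. The following Python sketch (the figures are invented purely for illustration, not taken from Steve’s slides) shows that two series which merely trend in the same direction over the same period yield a correlation coefficient close to 1, despite having no causal connection at all.

```python
import numpy as np

# Ten consecutive years of two causally unrelated series that both happen to decline.
ie_market_share = np.array([68, 62, 55, 47, 40, 34, 28, 23, 18, 14])        # hypothetical %
murder_rate = np.array([5.8, 5.7, 5.4, 5.0, 4.8, 4.7, 4.7, 4.5, 4.4, 4.4])  # hypothetical per 100,000

# Pearson correlation: close to +1 simply because both series trend downwards together.
r = np.corrcoef(ie_market_share, murder_rate)[0, 1]
print(f"Pearson r = {r:.2f}")
```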

Data visualisation

Steve then moved the focus on to one particular way in which information claiming to be ‘evidence’ is shoved at us these days – data visualisation, which we may define as the use of graphical images (charts, graphs, data maps) to present data to an audience. He mentioned a project called Seeing Data, a collaboration between British and Norwegian universities, which is exploring the role of data visualisations in society (see http://seeingdata.org). According to this project, the key skills we need to work with data visualisations are…

  • language skills;
  • mathematical and statistical skills, including a familiarity with chart types and how to interpret them;
  • computer skills, for those cases where the visualisation is an interactive one;
  • and skills in critical thinking, such as those that may lead us to question the assumptions behind a visualisation, or to detect a ‘spin’ being put on the facts.

Steve showed a few visualisations that may require an effort to understand, including the London Underground ‘Tube map’ (more properly, a network diagram). Some people, said Steve, have problems using this to get from one place to another. Actually, a geographically accurate map of the Underground looks like a dense tangle of spaghetti at the centre with dangling strands at the periphery. Harry Beck’s famous diagram, much imitated by other transport networks, is simplified and distorted to focus attention on the ‘lines’ and the stations, especially those that serve as interconnectors. But it is certainly not intended as a guide to direction or distance: using it to plan a walking tour would be a big mistake.

One might therefore say that effective understanding of a diagram requires experience of that diagram type and its conventions: a sub-type of the second factor in the list above (mathematical and statistical skills, including familiarity with chart types). Charts, graphs, diagrams and data maps are highly formalised semiotic expressions. Partly because of that formalism, but also because many visualisations are designed to support fast expert analysis, we would be wrong to expect every visualisation to be understood by just anyone. Even the experienced technicians who recently did my echocardiogram defer to the consultant cardiologist when it comes to interpreting the visualised data.

Critical thinking in focus

For our first exercise, Steve wanted us to apply critical thinking to ten given situations, set out in a document shared with each table group. Five of these puzzlers were illustrated with a graphic. To prime us, he talked through a number of images. In one case – a chart showing changing quantities over time – the vertical axis did not start at zero (a common phenomenon): this gave the impression of a large change over time which wasn’t warranted by the data. A map of referendum voting patterns across the counties and regions of Scotland could skew one’s impressions, owing to the vast area of the sparsely populated Highlands, Galloway etc., compared with the small but densely settled zones of Glasgow and Lanarkshire. Other examples illustrated problems of sampling bias.
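For readers who like to see the trick made explicit, here is a small Python/matplotlib sketch (with made-up numbers, not the chart Steve showed) that plots the same data twice: once with a truncated vertical axis, and once with a zero baseline. The first panel makes a roughly four per cent change look dramatic; the second puts it in proportion.

```python
import matplotlib.pyplot as plt

# Hypothetical figures: the underlying change is only about 4% over four years.
years = [2012, 2013, 2014, 2015]
values = [102, 103, 104, 106]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.bar(years, values)
ax1.set_ylim(100, 107)            # truncated axis: the change looks dramatic
ax1.set_title("y-axis starts at 100")

ax2.bar(years, values)
ax2.set_ylim(0, 110)              # zero baseline: the same change looks modest
ax2.set_title("y-axis starts at 0")

plt.tight_layout()
plt.show()
```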

The exercises were quite fun. One of my favourites, and it did bamboozle me, showed a side elevation and plan picture of a twin-engined Douglas Dakota cargo plane marked with loads of red dots. The accompanying text said that the RAF had responded to losses of their planes to German anti-aircraft fire by examining the ones which got back, to see where the damage had occurred. They had aggregated the data (that is what the red dots indicated) and analysed the diagram to determine where to apply protective armour. What we were supposed to notice was that clearly, as those planes had managed to return, being struck in those marked places was usually survivable. The fact that no such dots showed up on the cockpit or either engine was because strikes in those locations tended to be fatal.

I won’t go through all of the examples in the exercise. In one case we were supposed to analyse trends in deaths by firearms year on year, but the y-axis had been inverted, turning the curve upside down. In another case, the y-axis was organised as a geometric progression rather than a linear one (each extra increment represented a doubling of quantity). That was quite a weird example, but bear in mind that logarithmic scales are common in some scientific graphs – and are used appropriately there, and understood by their intended audience.
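Again, a short illustrative sketch may help. The following Python/matplotlib fragment (hypothetical figures, not the exercise’s own data) plots a quantity that doubles every year on both a linear and a logarithmic vertical axis: the same explosive growth appears as a straight, innocuous-looking line on the log scale, which is exactly why such a scale can mislead a reader who does not notice it.

```python
import matplotlib.pyplot as plt

# A quantity that doubles every year: 1, 2, 4, ... 128 over eight years.
years = list(range(2008, 2016))
cases = [2 ** i for i in range(len(years))]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.plot(years, cases, marker="o")
ax1.set_title("Linear y-axis: explosive growth is obvious")

ax2.plot(years, cases, marker="o")
ax2.set_yscale("log")             # logarithmic scale: the doubling series becomes a straight line
ax2.set_title("Logarithmic y-axis: a straight line")

plt.tight_layout()
plt.show()
```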

It was fun working with the team on my table. We were pretty good at identifying, in some cases, multiple points of criticism. That rather undermined our score, because Steve decreed there should be only one criticism per example, and his answers had to be regarded as the right ones for the purpose of the competition! But the real benefit lay in the process of analysis and discussion.

Whose evidence do you trust?

The second exercise painted the scenario of an Italian company developing software for the retail sector: the concern was to know whether introducing performance-related pay would improve productivity in the engineering teams.

Steve had concocted eight forms of ‘evidence’ from various sources: a senior external consultant who said ‘no, you need to develop the leadership skills of supervisors’; a trusted friend who pointed to a study from the London School of Economics; an article in the Financial Times; a Harvard study of productivity amongst Chinese mineworkers; various responses to the question posted on a Human Resources discussion forum; what the HR director thinks. There were also two bits of evidence closer to the company: data about discrepancies in performance between the existing teams, which seemed to indicate that the most productive teams were those with a high proportion of senior engineers; and information that the union representing most of the engineers had resisted previous attempts at performance-related pay differentials.

We were supposed to rank these inputs on the basis of how trustworthy we thought their sources to be; my table found it quite hard to avoid also considering how relevant the offered evidence might be. For example, we didn’t think the circumstances of Italian software engineers and Chinese mineworkers remotely comparable. I found it interesting how many of us tended to regard people like consultants and top management as trustworthy, whereas, when the employees’ union was mentioned, people said, ‘Oh, they’ll be biased’. There is obviously a lot of subjectivity involved in evaluating sources.

If one thinks more broadly of evaluating the relevance and validity of evidence on offer, it appears to have at least two components: the degree to which the experience or model offered has both internal coherence and similarity to the situation for which a decision is being sought; and evaluation of the ‘messenger’ bringing those ideas. Thus there is a danger that useful evidence might be disregarded because of bias against the source.

Personal reflections

This was certainly a lively and highly engaged meeting, and Steve must be congratulated for how he structured the ‘table work’. The tasks we were set may have been artificial, and I thought some of the conclusions reached could be challenged, but it made for a lot of discussion, which indeed continued when we broke into unstructured networking afterwards, with drinks and snacks.

Clearly, it is valuable to learn to be critical of data visualisations, especially now they have become so fashionable. Data visualisations are often poor because their creators have not thought properly about what is to be communicated, and to what kind of audience, or haven’t considered how these highly abstracted and formal representations may be misunderstood. (And then, of course, there’s the possibility that the purpose is deliberately to mislead!)

There is a whole different (and more political) tack that we could have explored. This was the last NetIKX meeting of 2016, a year in which we have witnessed some quite outrageous distortions of the truth around the so-called ‘Brexit’ referendum, to name but one field of discourse. More generally, the media have been guilty of over-simplified representations of many very complex issues.

This was also the year in which Michael Gove exclaimed that we’d had enough of the opinions of experts – the kind of attitude that doesn’t bode well for the prospect of ‘evidence-based government’.

In respect of Evidence-Based Decision Making, I think that to rise to urgent environmental, social, developmental and political challenges, we definitely need the best evidence and predictive modelling that we can muster. And whatever respect we as citizens have for our own intelligence, it is hubris to think that we can make sense of many of these hyper-complex situations on our own without the help of experts. But can we trust them?

The nature of that expert knowledge, how we engage with the experts and they with us, and how we apply critical thinking in respect of expert opinion – these are worthy topics for any knowledge and information management network, and not something that can be dealt with in an afternoon.

At the meeting, Dion Lindsay spoke up to propose that NetIKX might usefully find a platform or method for ongoing and extended discussions between meetings (an email list such as the KIDMM community uses is one such option, but there may be better Web-based ones). The NetIKX committee is happy to look into this – so I guess we should start looking for evidence on which to base a decision!

2 replies
  1. Edmund Lee says:

    Many thanks Conrad for a thought-provoking summary of the session. You touch in your personal reflections on the importance of trust in the context of evidence bases. I note that, shortly after the NetIKX session, and the election in the U.S., Oxford Dictionaries announced ‘post-truth’ as the word of the year for 2016. See https://en.oxforddictionaries.com/word-of-the-year/word-of-the-year-2016. I would suggest that the challenge for knowledge and information managers in a post-truth society is to address the issue of trust, and how to generate and sustain it, just as much as the question of how and what information to obtain, package up and promote in order to inform decision making.

    • Conrad Taylor says:

      Thank you for your very pertinent observations, Edmund. You can probably tell I had these political matters on my mind and was even more forthright about them in an earlier draft, but was persuaded to tone down my leftyness! :-) BBC World Service has had several broadcasts recently focusing on the ‘fake news’ issue, people’s ability to sift the news content of social media, the not always truthful role of mainstream media, etc.

      I welcomed Dion’s suggestion that NetIKX could usefully host a discussion platform to explore ideas after and between meetings – this is just the sort of question which might be useful to pursue.

