Developing Effective Collaborative Knowledge Spaces

Conrad Taylor writes:

During 2017, the 20th anniversary year for NetIKX, a number of eminent speakers have been invited to lead meetings, speakers who for the most part have addressed NetIKX before. At the meeting on 18 May 2017 the speakers were Paul Corney and Victoria Ward.

Paul worked for 25 years in top management in the City of London financial sector (Saudi International Bank and Zurich Reinsurance), and for the last couple of decades has pursued a ‘portfolio career’ as a business adviser, facilitator and business coach, with clients in 24 different countries, including Iran, Saudi Arabia, the Gulf States and several African countries.

Paul is also a managing partner at the Sparknow consultancy, which Victoria Ward founded in 1997. Victoria’s background is similarly in knowledge management in the banking sector. Sparknow approaches organisational KM using narrative enquiry methods, and Victoria can list amongst her former clients, a number of banks, government agencies, museums and cultural organisations, the World Health Organisation and the British Council.

Recently, Victoria and Paul have been working with Clive Holtham of the Cass Business School on a project looking at how the arrangement of space impacts the working environment, and knowledge sharing within that. Paul has been conducting a kind of rolling survey across various locations around the world. We in NetIKX would be the latest to add our thoughts; and Paul intends to publish a report as the summation of this enquiry.

Points of view

Paul and Victoria set up an exercise in which the forty or so people present were clustered into three groups, out of earshot of each other. Each group was then quietly told what ‘profession’ we were to adopt as our collective point of view. We were to carefully make an assessment of the room we were in, from that assumed profession’s point of view, and list the positive, and difficult, characteristics of the room. Then each group, through a spokesperson, would tell the others about their list of good or bad room features – and the other groups were supposed to guess that group’s profession!

Group One commented that the room was very white and light, and that there were lots of power points. They noted there was quite a lot of furniture, but the tables were on wheels and easily moved; there were lots of nooks and crannies, and lots of potential for mess around the coffee machines. We guessed that they were cleaners! Group Two mentioned the functional design of the room; the low ceiling and narrow form of the room; and lots of natural light from the windows. They were interior designers!

I was in Group Three and I think we had the most fun assignment. We talked about there being a couple of useful exits including a fire exit onto the roof (with presumably a way off that down to street level); various valuables conveniently next to the door, and some rather nice looking IT equipment; perhaps too many windows to be able to operate unseen, but no CCTV cameras. Yes, we were the thieves!

That was a nice, fun ice-breaker, but it was also more than that, as Victoria and Paul explained. Things (and not just rooms!) look different according to your point of view. They had come across this exercise used in a very large gathering at the Smithsonian Museum, and it’s especially useful to deploy at the start of a meeting when you want to draw attention to how a thing, or a situation, might look very different from somebody else’s perspective; something that’s good to bear in mind when there are many stakeholders.

Perspectives on Knowledge Management

KM, or Knowledge Management, has been described as ‘a discipline focused on ways that organisations create and use knowledge’. However, said Paul, beyond that there is no single accepted definition of what KM is, and it’s a field with no agreed global standards as yet.

Paul works around the use of knowledge within businesses. In his newly published book ‘Navigating the Minefield: a practical KM Companion’ he has suggested some characteristics which could define ‘a good KM programme’, such as it being in support of a business’s goals, and aligned with its culture. One focus will be operational, seeking to cut the costs of doing business (in money or time) – in practice, this is the focus of four out of five KM projects in business. Some projects look in more strategic directions, towards innovation and future business benefit.

One paradox of KM is that many of the people who practise it do not stay long term with their employers, but move on every few years to a new appointment. This can lead to the pursuit of short-term goals and ‘fighting fires’ rather than more strategic approaches.

How can you effectively transfer knowledge from an expert, to a wider community? One positive story Paul shared was of work he did with Cláudia Bandeira de Lima, a leading authority on childhood autism and language development in the Portuguese-speaking world. The solution they devised was to run a foundation programme in the methodology, PIPA (Programa Integrado Para o Autismo), teaching courses and accrediting practitioners.

To represent another aspect of KM, at the personal level, Paul used an image of a laptop. If it is stolen or breaks down, you can replace the hardware and the applications, but if you haven’t backed up the documents which constitute your knowledge resources, ‘you’re toast!’ In doing knowledge audits, he and Victoria often found that sloppy attitudes to managing digital knowledge resources were rampant. An American survey from a few years ago estimated that the typical cost of replacing someone in a senior business position is in the region of $400,000 – because when the previous incumbent moved on, they took their knowledge with them, and nobody had done anything to ‘back it up’.

Drivers and definitions

What is driving this thing called ‘knowledge management’? Why do people do it? To Paul it seems that a major driver within many businesses is compliance with regulations; and in a couple of years, when ISO standards for knowledge management appear, it will likely be about compliance with those standards as well. ‘Already today, if you want to sell a locomotive, one of the criteria is that you engage in knowledge management, and are seen to do so in a very professional way,’ explained Paul.

A second driver is around innovation and process efficiency; people believe there is benefit to doing things better with what you have. And a third driver is the management of risk. And then, in some organisations at any rate, there are concerns about using KM to support governance, strategy and vision.

Paul used a simple ‘three pillars’ diagram to represent the above scheme, but his next diagram, giving some examples of motivators/drivers for KM in the real world, was more complicated and so we reproduce it here as an image, with his permission. He represented five different industrial sectors as examples: nuclear power, the regulatory sector, government, industry and the services sector.

In the nuclear industry, a key driver is planning for the complex process of decommissioning power plants at the end of their lives. Companies anticipate that when that time comes, they will be downsizing, and at the same time losing people with maybe 40 years of experience in nuclear operations and decommissioning.

In the regulatory industry (as Paul and Victoria found through interviews in Canada some years back), a large problem is around succession planning as people at the top retire. This is similar to the driver for Shell’s ‘ROCK’ programme (Retention of Critical Knowledge), which they called ‘The Great Crew Change’.

In government, ‘flexible working’ has been invoked as a mantra. As Paul and Victoria discovered in interviews at the Department of Justice, a possible effect of this is the diffusion of specialist knowledge, as working becomes more generic. But if this can be managed, services can be improved.

Enhancing manufacturing processes is a key driver for industry. Paul described a recent three-year project he ran for Iran’s largest company, which aimed to shorten the time it took from coming up with an idea, to bringing it to market.

In the services sector, including finance and legal work, Paul said that the key to business efficiency is the effective re-use of precedent; it is in this sector that ‘artificial intelligence’ is likely to have the greatest impact.

At this point, Jonathan from the Horniman Museum said that he could identify with all those drivers; but in addition, their raison d’être at the Museum is the curation and transfer of knowledge to the general public. Victoria responded that she’d done work about ten years ago for the Museum Documentation Association, funded by the London Development Agency, looking at what museums contribute to the knowledge economy of London. (The MDA shortly afterwards relaunched itself as the Collections Trust.) Two things which she remembers well from that project, and which were not represented in Paul’s diagram, were:

As work gets more ‘nomadic’ and fluid, workers in various industries need somewhere they can think of as an intellectual ‘home’; for fashion, it would be the V&A. But when that MDA study was conducted, it seemed that museums were overlooking their rôle in relation to certain professional knowledge networks.

Knowledge Transfer Officers can play a vital rôle as a ‘cog’ or enabling connector, between the more entrepreneurial innovators in the organisation and those whose instincts are more curatorial and conservative; between ‘fast cultures’ and ‘slow cultures’, if you like.

Co-working hubs

Costs as a driver

Paul referred to a 2013 UK government report on Civil Service reform, authored by Andy Lake of Flexibility.co.uk and called The Way We Work: a guide to smart working environments [http://www.flexibility.co.uk/downloads/TW3-Guide-to-SmartWorking-withcasestudies-5mb.pdf]. This pointed out that the costs of providing working environments, both financial and environmental, can be reduced by switching away from dedicated desks and PCs, to co-working hubs.

Paul hasn’t worked in an office for 20 years – his ‘office’ is just wherever he finds himself with his Mac and his ’phone and other devices. Sparknow had an office for about five years, but the team decided it wasn’t necessary as long as people were disciplined in their collaboration practices. An executive recruitment firm in the USA has offered the opinion that perhaps by 2020, 40% of people will be mobile workers (I presume they mean office jobs only), and that they will be freelancers; the benefits claimed are lower operating costs and higher productivity.

With Prof Clive Holtham, Paul has been advancing the view that as these developments occur, organisations will have to ensure that working environments – be they physical like co-working hubs, or virtual like arrangements for remote working – will be conducive for effective Knowledge Management. (This is what we were going to be looking at for the rest of the day.)

Victoria Ward noted that when they first started doing Knowledge Audits, people never included looking at their ‘knowledge spaces’. They would look at their networks, their disciplines, but it always surprised the clients when they were asked how the physical workspace functioned. When asked to conduct a Knowledge Audit, they now ask to take a look at such spaces, and ask questions about how they are supported.

Good and bad knowledge spaces

‘Did you know that the average desk is occupied for only 45% of office hours?’ asked Paul. That’s what Will Hutton noted in 2002, in a report for the Work Foundation (the former Industrial Society). The foundation claimed that the workplace (the office workspace, that is – not fields and factories, shops and warehouses) was being reinvented as ‘an arena for ideas exchange’ and a drop-in workspace for mobile workers: a place where professional and social interaction can occur. And the foundation noted that workspaces which are badly designed or badly managed can actually damage the physical and mental wellbeing of staff.

The firm of Ove Arup believes that the future of (office) workspace will be a network of locations – many of them on short leases or even pay-as-you-go, shared spaces rather than highly ‘territorial’ ones. Also, they believe there will be a corresponding flexibility in working interactions, operating across both physical and virtual environments.

The Edge. In January 2017, Paul was helping to run some events around the KM Legal conference in Amsterdam. At a Smart Working summit in 2016, Paul had heard of an amazing office building in Amsterdam called ‘The Edge’, so on this trip he made a visit to the place, and was shown around by the architect and the building manager. The building’s developer was OVG Real Estate and the design was by London-based PLP Architecture. The building’s main tenant is the consulting firm, Deloitte. There is a video about the place on YouTube – at https://youtu.be/JSzko-K7dzo – and Paul showed it to us. (There is also a Bloomberg article at https://www.bloomberg.com/features/2015-the-edge-the-worlds-greenest-building/)

The video claims that The Edge is ‘certifiably the greenest building in the world’, with its extensive use of natural light, and harvesting of solar power (the building is a net producer, not consumer, of electricity). Heat pumps circulate water through an insulated aquifer over a hundred metres below, to warm the building in winter and cool it in summer. From the viewpoint of our meeting topic, however, what is significant is how it is structured as a place for a new way of productive working, what the Dutch call het nieuwe werken.

Nobody gets a desk of their own at The Edge; Deloitte’s 2,500 workers there share 1,000 ‘hot desk’ locations, and can also access tiny cubicles or shared meeting facilities, some with massive flat screens which sync with laptops or mobiles. Workspaces are assigned to you according to your schedule for the day, and your ‘home base’ is any locker which you can find empty for the day.

Access to these facilities is driven by a smartphone app used by every worker, and a system which notes everyone’s location, needs and preferences and adjusts the local environment accordingly; this is supported by a distributed network of 28,000 sensors.

Paul also commented that people do really want to come to work at The Edge – that’s been a driver of recruitment, there is little absenteeism, and it is somewhere clients want to visit too. Another thing that users of the building repeatedly praise is the use of natural daylight, which supplies 80% of lighting needs (including through a huge central covered atrium).

Ellipsis Media is a successful content management company, which started above a toy shop in Croydon. They used to have meetings around a particular table in the pub opposite, and as they grew into new premises, they bought that table and installed it as their own little bit of history. Paul mentioned other instances of companies (HSBC, Standard Chartered) using their office space to curate their history – the history of their internal community and its journey.

BMS. Paul also described his engagement with the world’s largest reinsurance broker, BMS, which seized the opportunity of its move to One America Square, near London Fenchurch Street station. The move brought 13 different federated business units into one shared location. As part of the move, BMS created collaborative physical spaces, including a meetings hub called ‘Connexions’ and an adjacent business lounge with the very best coffee, subsidised snacks and high-speed mobile Internet access. This had a great effect in helping people to break out of the silos of the formerly isolated business units (see Paul’s account of BMS’s journey at http://www.knowledgeetal.com/?p=465).

KHDA. During 2016, on his way back from Iran, Paul went to see friends in Dubai. The Dubai Knowledge and Human Development Authority (KHDA) manages secondary and higher education in Dubai. He showed us pictures of their open-plan workspace – you’ll often see the Chief Executive sitting there. It’s a very informal place – Paul even had a budgerigar fly past his head!

Asian Development Bank. Victoria and Paul worked together (as Sparknow) in Manila, on a project for the Asian Development Bank. ADB’s shared atrium space at the time included a touchscreen with a huge Google Earth display. Victoria added that ADB had long had a traditionally styled library, but had remodelled it, moving the bookshelves to the edge and creating an open central space. The Google map was put there, and used as an ‘attractor’ to cause people to slow down and encounter each other, to cut across the boundaries in the organisation. ADB used the space for a number of knowledge-sharing events, including ‘Inside Thursdays’.

ADB got Paul and Victoria to run a three-day workshop in that space, exploring the use of narrative in the ADB. They were able to construct a temporary collaborative knowledge space, with a long timeline laid out over connected tables, and workstations at which participants could mark out a map of the ADB’s history, and their hopes for its future – and to identify where interviews should be conducted with the oral history practitioners, and what kinds of questions should be asked.

That event was very memorable for its visual components, too. That pop-up knowledge space, and the shared creation of the timeline and other artefacts, created a useful and engaging memory for people when they then looked later at the products of the knowledge work.

ADB published a paper about this in 2010, called ‘Reflections and Beyond’ (184 pages), which can be retrieved as a PDF from http://reflections.adb.org/wp-content/uploads/2014/08/adb-reflections-and-beyond.pdf. There is also a concise Sparknow narrative about the project at http://www.sparknow.net/publications/ADB_Reflections_Beyond_method.pdf.

Exercise set-up: the Knowledge Space Survey

Before we took our refreshment break, Paul gave a little background to a rolling project he has been co-ordinating, called the ‘Collaborative Knowledge Space Survey’. This qualitative enquiry had already gathered contributions, some by email, and some at events such as at a Masterclass he ran in March at the International Islamic University of Malaysia in Kuala Lumpur. Now it would be NetIKX participants’ chance to contribute!

Paul’s collaborators in collating and reviewing the results are Prof Clive Holtham and Ningyi Jiang at Cass Business School.

To capture people’s ideas about ‘knowledge spaces’ at work (both physical and virtual ones), the survey has ten set questions, but the answers could be open-ended, in textual and often narrative form. There were certainly no multiple-choice answer mechanisms.

Around the walls of the room in which we were meeting, nine posters had been set out, each one with one of the survey questions (except the first question, ‘Which continent do you work in?’), and the space below left blank in readiness for our contributions. Paul asked us to peruse the questions during the break, and choose which of them we would personally like to work with. The nine remaining questions were:

  • Question 2 — Where do you have your most interesting work conversations and do your best work?
  • Question 3 — Do you think your own workspace encourages collaboration? Tell us about a recent incident where this happened and who was involved.
  • Question 4 — Are there any parts of your building or workspace which you associate with memorable moments of work? Tell us about a time and place when this happened.
  • Question 5 — How does where you work reflect the way you work?
  • Question 6 — Have you ever witnessed a company change its physical workspace radically? What happened?
  • Question 7 — What do you understand by the term ‘digital workspace’?
  • Question 8 — In your experience, can you now replace physical workspace with a digital workspace? If so, how? If not, why not?
  • Question 9 — Does your organisation have a workspace strategy, and if so does it include a digital workspace? Please tell us about it.
  • Question 10 — ‘Any Device, Any Time, Anywhere’ is how one organisation now defines its approach to remote working. Looking forward to 2020, what changes do you foresee in the way you work and the devices you will be using?

Each of us should gravitate towards the question that interested us most; an ideal group size would be 4–6 people. Grouped around our question of choice, we should consider whether there was a pattern or theme that we might use in a checklist, and what keywords we might use to ‘tag’ the responses we chose.

The exercise process

The way our NetIKX group approached the Collaborative Knowledge Space Survey is not the only way it can be done. For a start, the way we assigned ourselves to particular questions meant that by and large each person contributed to thinking about only one of the nine questions – even though Paul declared the Open Space ‘law of two feet’, so we could have moved from one group to another. But the separate group discussions went well in the 30–40 minutes available.

Nobody was attracted to Question 4, and for obvious reasons Question 1 was off the table. Thus we collected reactions to eight out of the ten survey questions. It is decades since I had anything like a ‘regular job’ and worked in a workplace, so I chose to work in the group clustered around Question 8.

After we had filled our posters, Paul prompted each group in turn to share its thinking with the rest of the room. You can see the posters themselves, which Paul afterwards embedded as images within the slide set accompanying this blog post. I also took my recording gear with me around the room to capture what people said in more detail.

Q2: Where do you have your most interesting work conversations, and where do you do your best work? — This group discussed the value of having both quiet places and busy places. Melanie described the Hub at DWP, which is a large area with a coffee bar and lots of different tables. On the poster, the group had noted that humour and banter, for example around the kitchen, bring people together and leave you feeling motivated. When you’re on a journey, on a train, even just walking between places, this has value in freeing up ‘internal conversations’; you often need silence ‘so you can hear yourself think’.

The keywords the group chose were human – flexible – adaptable – informal – fun – balance (between external and internal conversation, and between physical and digital) – mindfulness. Emma added that the most interesting and significant conversations are usually in an informal setting, and are often serendipitous.

Q3: Do you think your own work space encourages collaboration? — This group had started by comparing their own workspace experiences. Lissi referred to her ‘collaboration cocktail’ of spaces, ranging from attending NetIKX meetings to sitting up in bed to do her work. Victoria Ward had a range of spaces and reported positively on ‘Slack’ (slack.com), a cloud-based service which describes itself as ‘real-time messaging, archiving and search for modern teams’ (it’s an acronym for ‘Searchable Log of All Conversation and Knowledge’!).

Graham Robertson works largely alone and his workspace is a room with no windows. ‘Radical uncubicalisation’ was a phrase that came up from two organisations that are trying to draw people out of their cubicles. Edmund Lee (Historic England) said that when people get a taste of this, they love it, but you need other kinds of constraint in place to make things happen.

Collaboration, said someone, involves interaction between human and human, and also between human and information. Information has its own kind of structure around the workplace; but humans, it must be remembered, have other goals in life, even when they are at work: getting on with people, getting something to eat, whatever.

Q5: How does where you work reflect how you work? — Naomi Lees (DWP) said that the culture you work in reflects the physical aspects of where you work. For example, Ayo Onotola is a librarian who works in a prison. (Every prison is required to have a library, as part of the process of reform and rehabilitation of the inmates.) He said that it may surprise people to know that quite a proportion of the prisoners are illiterate, and many don’t have English as their first language, so the prison runs a number of educational programmes for them. The library is key to that.

But – when you work in a prison library, it is a bit like being a prisoner yourself! When there is some trouble in the prison, there may be a general lockdown, then nobody comes to the library all day. Prisoners’ behaviour in the library is quite different from how they behave in their cells; ‘they see the library as a cool place to come and chill out’, Ayo said. And they are keen to collect books to take back to their cells. (On the poster, there was a note that recently the library has been moved to the canteen space, and is now getting more use.)

David Penfold’s example was the university, which has many different possible work environments and people – staff and students both – move between them and find those that are most conducive to what they want to do right now. And people also do much of their work from home.

Q6 — Have you ever witnessed a company change its physical workspace radically? What happened? — ‘Hot desking’ inevitably came up within this group; one person spoke of a transformation to open plan, hot desking and a clear-desk policy, including senior managers. Yes, there was resistance to this at first, but people have come to realise how working together in this way has encouraged the sharing of ideas quite naturally through conversation. It has to be said that the facilities provided were very good. Prior discussion with the users had raised the need for spaces for private conversation, and they had been provided. There are also ‘meeting pods’ set in the middle of the canteen area.

Good space design is crucial, said another person, and consultation with staff in advance is the key. When he worked at the Department of Energy and Climate Change, they had discovered that there were often serendipitous meetings in lifts, which then moved to an adjacent space to continue. In another job, at a research institute, staff had been worried about the place becoming too noisy for concentration; this was met by setting up booths with acoustic shielding, for study or for private conversation.

Canteen spaces are particularly ripe for creative use, and that goes well with a culture that encourages people to take lunch away from their desks. (I remember that when I was doing a series of training workshops at Enterprise Oil, their staff canteen provided lovely food free of charge, which was certainly a motivator in that direction!)

Q7: What do you understand by the term ‘digital workspace’? — This group understood a digital workspace to be something that was open and without boundaries, both boundaries of physical space and time, able to operate 24 hours a day, seven days a week. They noted that this requires broadband that is fast enough. It should enable you to do what you would do in a ‘normal’ or ‘standard’ workspace, but allows for collaboration and the sharing of information.

Paul referred to work he had done in Africa, where there is usually very poor access to the Internet. But people adapted to that by communicating via WhatsApp – short, asynchronous conversations that can be picked up again after a communications breakdown.

Q8: In your experience, can you now replace a physical workspace with a digital workspace? If so, how? If not, why not? — This was the group I was in, with Edmund Lee, and the first thing we decided was that the afternoon’s conversation had had an unspoken bias towards office-type work. If you are a plumber, a farmer or construction worker, a social worker or a surgeon, a shop assistant or other front-line customer service worker, what you can achieve in a digital workspace will be strictly limited. So no, you cannot replace physical workspaces with digital ones, except in some narrowly defined fields.

For most of those at today’s event, a so-called digital workspace can substitute for many aspects of the physical workspace, but that depends on how good a digital surrogate you can create for the physicality of that with which you work. Edmund works with archaeological excavations, and he noted that before you can consider implementing a digital workspace for such work, you have to find a way to make a digital surrogate of the things you work with. An example would be an expert in Roman pottery who has access to the physical artefacts, but nothing more than a digital representation of the site where they were found.

Another issue is the functionality and ‘affordances’ of the digital tools available. There are bandwidth and infrastructural constraints, and there are human factors. When conversations take place over a digital medium, can they convey body language? Paul agreed that is a huge issue, and he had just been running a workshop with Chris Collison on improving work in virtual teams and communities. [Note that Chris Collison is the speaker at the September NetIKX meeting.]

There are also new skill requirements and support issues. Edmund told of how their IT department had installed large digital whiteboards in the main meeting rooms, but didn’t tell anybody how to use them. So, the technology worked for the IT department but for nobody else!

Q9: Does your organisation have a workspace strategy and if so, does it include digital workspace? — This did not result in a poster, but Malcolm Weston reported on the successes at Aecom, which has now grown (by a process of acquisition and amalgamation) to 187,000 employees across 150 countries. That process required workspaces to be brought together, and collaboration between the business units to be enhanced. And so Aecom did set out a formal workplace strategy, to be implemented in every office worldwide.

The implementation in London involved internal change management consultants, and interior design consultants, talking to different teams within the organisation asking them what they liked about their current workspace environment, and what they didn’t like, and what they wanted changed. The new environment was created in the offices in Aldgate Tower.

Aecom staff now work in ‘neighbourhoods’, with the colleagues in their team, though not always at the same desk. Teams which would naturally want to collaborate on delivering work are situated adjacent to each other. There are internal staircases between four of the floors, with open-plan breakout areas around them all. The facilities range from small ‘walled sofas’ suitable for taking part in a conference call, and meeting rooms which can be booked via a phone app, through to a small lecture theatre. No-one has a fixed computer; everyone has a mobile phone and a laptop; there is secure WiFi. You can also work from home via the VPN.

One of the drivers was to improve customer satisfaction; another was to avoid costly redesign by getting things right through collaboration, first time around. It also meshed with Aecom’s collaborative selling initiative; clients like to come and have meetings at Aecom’s place.

Q.10: Looking forward to 2020, what changes do you foresee in the way you work and the devices you will be using? — This team described a Lloyds Bank demonstration of virtual presence, using a VR headset. They thought one of the challenges of the future might be the emphasis on self-service, and the variety of devices, and perhaps a decentralisation of information storage. The team chose ‘change’ and ‘disruption’ and ‘managing complexity’ as key phrases.

Steve Dale spoke of having recently finished some work for an international organisation spread across thirty countries. Their policy is ‘extreme BYOD’ (bring your own device) – no rules at all about what equipment or software to use. They did an audit, and discovered sixty different systems in use. And there were concomitant problems – a lack of collaboration across teams, and how on earth do you find stuff? They did interviews with stakeholders, and discovered a split between people who like this freedom, and others who flounder in this lack of structure (particularly people newly joining the organisation).

Wrapping up

Paul skipped a number of his slides, which review the survey responses from Lisbon, Kuala Lumpur etc. A couple of slides also pulled out some of the insights which are beginning to emerge in the analysis Paul is doing with Clive Holtham and Ningyi Jiang at Cass Business School.

Paul referred to a meeting he and Victoria had recently had with Neil Usher at BSkyB. Neil has a twelve-point checklist: Daylight – Connectivity – Space – Choice – Control – Comfort – Refresh – Influence – Storage – Colour – Wash – Inclusion. Paul didn’t have time to unpack what these mean; apparently they are explained on Neil’s blog (http://workessence.com/). The two most important aspects, according to Neil, are natural daylight, and giving people choices.

Paul finished the afternoon workshop drawing attention to some closing slides which give contact details for himself and for Victoria.

Paul J Corney – paul.corney[at]knowledgeetal

On Twitter: pauljcorney

On Skype: corneyp

On mobile: +44 (0) 777 608 5857

Victoria Ward — victoria.ward[at]sparknow.net

A personal thought on ‘digital’ vs ‘virtual’

Paul and Victoria contrasted physical spaces where people meet and converse with digital ones. I would prefer a contrast between physical and virtual spaces. My reason is that I wish to give a nod to older traditions of knowledge sharing, which used correspondence and publication — an instance of which is the so-called Invisible College. Collaboration without face-to-face contact did not need an electronic medium to get started; it required shared language, writing, and a means of sending messages.

The Implications of Blockchain for KM and IM

Conrad Taylor writes:

The speakers at the meeting on 6 July 2017 were Marc Stephenson, Noeleen Schenk and John Sheridan.

Marc Stephenson is the Technical Director at Metataxis. He has worked on the design, implementation and ongoing management of information systems for over 25 years, for organisations in health, central and local government, banking, utilities, new media and publishing. He has architected and implemented many IT solutions, including intranets, document management systems, records management systems and ECM portals. Marc recognises the need to design solutions that deliver maximum benefit at minimal cost, by focusing on the business, the users and, crucially, the information requirements, rather than on unnecessary technology and functionality.

Noeleen Schenk has over twenty years’ experience of working in the information sector as a practitioner, researcher and consultant. Her recent projects have focused on all aspects of information and knowledge management – from governance to assurance – helping clients successfully manage their information and minimise the risk to their information assets. These projects include information security, information and data handling, information risk management, and document and records management. In addition to working with clients, Noeleen is passionately interested in the constantly changing information and knowledge management landscape, the use of technology, and new ways of working – helping businesses identify critical changes, assess the opportunities, and then develop options and map out strategies to turn them into reality.

John Sheridan is the Digital Director at The National Archives, where he leads the development of the organisation’s digital archiving capability and the transformation of its digital services. John’s academic background is in mathematics and information technology, with a degree in Mathematics and Computer Science from the University of Southampton and a Master’s Degree in Information Technology from the University of Liverpool. John recently led, as Principal Investigator, an Arts and Humanities Research Council funded project, ‘big data for law’, exploring the application of data analytics to the statute book. More recently he helped shape the Archangel research project, led by the University of Surrey, looking at the applications of distributed ledger technology for archives. A former co-chair of the W3C e-Government Interest Group, John has a strong interest in web and data standards. He serves on the UK Government’s Open Standards Board, which sets data standards for use across government. John was an early pioneer of open data and remains active in that community.

Blockchain is a technology that was first developed as the technical basis for the cryptocurrency Bitcoin, but there has been recent speculation that it might be useful for various information management purposes too. There is quite a ‘buzz’ around the topic, yet it is too complex for many people to figure out, so it’s not surprising that the seminar attracted the biggest turnout of the year so far.

The seminar took the form of three presentations, two from the consultancy Metataxis and one from The National Archives. The table group discussions that followed were simply open and unstructured, with a brief period at the end for sharing ideas.

The subject was indeed complex and a lot to take in. In creating this piece I have gone beyond what we were told on the day, done some extra research, and added my own observations. I hope this will make some things clearer, and qualify some of what our speakers said, especially where it comes to technical details.

Marc Stephenson gives a technical overview

The first speaker was Marc Stephenson, Technical Director at Metataxis, the information architecture and information management consultancy. In the limited time available, Marc attempted a technical briefing.

Marc’s first point was that it’s not easy to define blockchain. It is not just a technology, but also a concept and a framework for ways of working with records and information; and it has a number of implementations, which differ in significant ways from each other. Marc suggested that, paradoxically, blockchain can be described as ‘powerful and simple’, but also ‘subtle, and difficult to understand’. Even with two technical degrees under his belt, Marc confessed it had taken him a while to get his head around it. I sympathise!

The largest and best-known implementation of blockchain so far is the infrastructure for the digital cryptocurrency ‘Bitcoin’ – so much so that many people get the two confused (and others, in my experience, think that some of the features of Bitcoin are essential to blockchain – I shall be suggesting otherwise).

Wikipedia (at http://en.wikipedia.org/wiki/Blockchain) offers this definition:

A blockchain […] is a distributed database that maintains a continuously growing list of ordered records called blocks. Each block contains a timestamp and a link to a previous block. By design, blockchains are inherently resistant to modification of the data — once recorded, the data in a block cannot be altered retroactively. Through the use of a peer-to-peer network and a distributed timestamping server, a blockchain database is managed autonomously… [A blockchain is] an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. The ledger itself can also be programmed to trigger transactions automatically.

Marc then dug further into this definition, but in a way which left some confused about what is specific to Bitcoin and what are the more generic aspects of blockchain. Here, I have tried to tease these apart.

Distributed database — Marc said that a blockchain is intended to be a massively distributed database, so there may be many complete copies of the blockchain data file on server computers in many organisations, in many countries. The intention is to avoid the situation in which users of the system have to trust a single authority.

I am sceptical as to whether blockchains necessarily require this characteristic of distribution over a peer-to-peer network, but I can see that it is valuable where serious issues of trust are at stake. As we heard later from The National Archives, it is also possible to create similar distributed ledger systems shared between a smaller number of parties which already trust each other.

Continuously growing chain of unalterable ‘blocks’ — The blockchain database file is a sequential chain divided into ‘blocks’ of data. Indeed, when blockchain was first described in 2008 by ‘Satoshi Nakamoto’, the system’s pseudonymous creator, the phrase ‘block chain’ was presented as two separate words. When the database is updated by a new transaction, no part of the existing data structure is overwritten. Instead, a new data block describing the change or changes (in the case of Bitcoin, a bundle of transactions) is appended to the end of the chain, with a link that points back to the previous block; which points back to the one before it; and so on back to the ‘genesis block’.

One consequence of this data structure is that a very active blockchain that’s being modified all the time grows and grows, potentially to monstrous proportions. The blockchain database file that maintains Bitcoin has now grown to 122 gigabytes! Remember, this file doesn’t live on one centralised server, but is duplicated many times across a peer-to-peer network. Therefore, a negative consequence of blockchain could be the enormous expense of computing hardware resources and energy involved in a blockchain system.

(As I shall later explain, there are some peculiar features of Bitcoin which drive its bloat and its massive use of computational resources; for blockchains in general, it ain’t necessarily so.)

Timestamping — when a new block is created at the end of a chain, it receives a timestamp. The Bitcoin ‘timestamp server’ is not a single machine, but a distributed function.

Encryption — According to Marc, all the data in a blockchain is encrypted. More accurately, in a cryptocurrency system the parties to a transaction are identified only by cryptographically derived addresses, so although the contents of the blocks are a matter of public record, it is very hard to work out who was transferring value to whom. (It is also possible to implement a blockchain without any encryption of the main data content.)

Managed autonomously — For Bitcoin, and other cryptocurrencies, the management of the database is done by distributed software, so there is no single entity, person, organisation or country in control.

Verifiable blocks — It’s important to the blockchain concept that all the blocks in the chain can be verified by anyone. For Bitcoin, this record is accessible at the site blockchain.info.

Automatically actionable — In some blockchain systems, blocks may contain more than data; at a minimum they can trigger transfers of value between participants, and there are some blockchain implementations – Ethereum being a notable example – which can be programmed to ‘do’ stuff when a certain condition has been met. Because this happens without user control, without mediation, all of the actors can trust the system.

Digging into detail

In this section, I am adding more detail from my own reading around the subject. I find it easiest to start with Bitcoin as the key example of a blockchain, then explore how other implementations vary from it.

‘Satoshi Nakamoto’ created blockchain in the first place to implement Bitcoin as a digital means to hold and exchange value – a currency. And exchange-value is a very simple thing to record, really, whereas using a blockchain to record more complex things such as legal contracts or medical records adds extra problems – I’ll look at that later. Let’s start by explaining Bitcoin.

Alice wants to pay Bob. Alice ‘owns’ five bitcoins – or to put it more accurately, the Bitcoin transaction record verifies that she has an entitlement to that amount of bitcoin value: the ‘coins’ do not have any physical existence. She might have purchased them online with her credit card, from a Bitcoin broker company such as eToro. Now, she wants to transfer some bitcoin value to Bob, who in this story is providing her with something for which he wants payment, and has emailed her an invoice to the value of 1.23 BTC. The invoice contains a ‘Bitcoin address’ – a single-use identifier token, usually a string of 34 alphanumeric characters, representing the destination of the payment.

To initiate this payment, she needs some software called a ‘Bitcoin wallet’. Examples are breadwallet for the iPhone and iPad, or Armory for Mac, Linux and Windows computers. There are also online wallets. Users may think, ‘the wallet is where I store my bitcoins’. More accurately, the wallet stores the digital credentials you need to access the bitcoin values registered in the blockchain ledger against your anonymised identity.

Launching her wallet, Alice enters the amount she wants to send, plus the Bitcoin address provided by Bob, and presses Send.

For security, Alice’s wallet uses public–private key cryptography to append a digital signature to the resulting message. As long as Alice keeps her private key secret, no-one can spoof the Bitcoin system into accepting a message as coming from her when it did not. The Bitcoin messaging system records neither Alice’s nor Bob’s identity in the data record, other than as anonymised addresses: an aspect of Bitcoin that has been criticised for its ability to mask criminally-inspired transactions.
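
To make the signing step concrete, here is a minimal sketch of my own (not from the talk), using the third-party Python ‘ecdsa’ package and SECP256k1, the elliptic curve Bitcoin actually uses; real wallets add address derivation and transaction serialisation on top of this:

    from ecdsa import SigningKey, SECP256k1, BadSignatureError

    # Alice's wallet guards a private signing key; the public key can be shared.
    private_key = SigningKey.generate(curve=SECP256k1)
    public_key = private_key.get_verifying_key()

    transaction = b"pay 1.23 BTC to <address supplied by Bob>"
    signature = private_key.sign(transaction)

    # Anyone holding the public key can confirm the message came from Alice...
    assert public_key.verify(signature, transaction)

    # ...and any tampering with the message makes verification fail.
    try:
        public_key.verify(signature, b"pay 99 BTC to <another address>")
    except BadSignatureError:
        print("Forged or altered transaction rejected")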

At this stage, Alice is initiating no more than a proposal, namely that the Bitcoin blockchain should be altered to show her wallet as that bit ‘emptier’, and Bob’s a bit ‘fuller’. Participating computers on the network will check whether Alice’s digital signature can be verified with her public key, that the address provided by Bob is valid, and that Alice’s account does in fact have enough bitcoin value to support the transaction.

If Alice’s bitcoin transaction proposal is found to be valid and respectable, the transaction can be enacted, by modifying the blockchain database (updating the ledger, if you like). As Marc pointed out, this is done not by changing what is there already, but by adding a new block to the end of the chain. Multiple transactions get bundled together into one Bitcoin block, and the process is dynamically managed by the Bitcoin server network to permit the generation of just one new such block approximately every ten minutes – for peculiar reasons I shall later explain.

Making a block: the role of the ‘hash’

The blocks are generated by special participating servers in the Bitcoin network, which are called ‘miners’ because they get automatically rewarded for the work they do by having some new Bitcoin value allocated to them.

In the process of making a block to add to the Bitcoin blockchain, the first step is to gather up the pending transaction records, which are placed into the body of the new block. These transaction records themselves are not encrypted, though the identities of senders and receivers are anonymised. I have heard people say that the whole blockchain is irreversibly encrypted, but if you think about it for a second, this has to be nonsense. If the records were rendered uninspectable, the blockchain would be useless as a record-keeping system!

However, the block as a whole, and beyond that the blockchain, has to be protected from accidental or malicious alteration. To do this, the transaction data is put through a process called ‘cryptographic hashing’. Hashing is a well-established computing process that feeds an arbitrarily large amount of data (the ‘input’ or ‘message’) through a precisely defined algorithmic process, which reduces it down to a fixed-length string of digits (the ‘hash’). The hashing algorithm used by Bitcoin is SHA-256, created by the US National Security Agency and put into the public domain.

By way of example, I used the facility at http://passwordsgenerator.net/sha256-hash-generator to make an SHA-256 hash of everything in this article up to the end of the last paragraph (in previous edits, I should add; I’ve made changes since). I got 9F0B 653D 4E6E 7323 4E03 B04C F246 4517 8A96 DFF1 7AA1 DA1B F146 6E1D 27B0 CA75 (you can ignore the spaces).

The hash string looks kind of random, but it isn’t – it’s ‘deterministic’. Applying the same hashing algorithm to the same data input will always result in the same hash output. But, if the input data were to be modified by even a single character or byte, the resulting hash would come out markedly different.

Note that the hash function is, for all practical purposes, ‘one-way’. That is, going from data to hash is easy, but processing the hash back into the data is impossible: in the case of the example I just provided, so much data has been discarded in the hashing process that no-one receiving just the hash can ever reconstitute the data. It is also theoretically possible, because of the data-winnowing process, that another set of data subjected to the same hashing algorithm could output the same hash, but this is an extremely unlikely occurrence. In the language of cryptography, the hashing process is described as ‘collision-resistant’.
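
A minimal Python sketch of my own, using the standard library’s hashlib, makes the determinism and the sensitivity easy to verify for yourself:

    import hashlib

    def sha256_hex(message: str) -> str:
        """Return the SHA-256 hash of a UTF-8 string, as 64 hex digits."""
        return hashlib.sha256(message.encode("utf-8")).hexdigest()

    # Deterministic: the same input always yields the same hash.
    assert sha256_hex("Alice pays Bob 1.23 BTC") == \
           sha256_hex("Alice pays Bob 1.23 BTC")

    # But change a single character and the hash comes out markedly different.
    print(sha256_hex("Alice pays Bob 1.23 BTC"))
    print(sha256_hex("Alice pays Bob 1.24 BTC"))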

The sole purpose of this hashing process is to build a kind of internal certificate, which gets written into a special part of the block called the ‘header’. Here, cryptography is not being used to hide the transaction data, as it might in secret messaging, but to provide a guarantee that the data has not been tampered with.

Joining the hash of the transaction data in the header are some other data, including the current timestamp, and a hash of the header of the preceding block in the chain. These additions are what gives the blockchain its inherent history, for the preceding block also contained a hash of the header of the block before that, and so on down the line to the very first block ever made.
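
As a rough illustration of that chaining, here is a simplified sketch of my own; real Bitcoin headers carry further fields, such as the difficulty target and the nonce discussed below:

    import hashlib, json, time

    def header_hash(header):
        """Hash a block header; sort_keys makes the serialisation repeatable."""
        return hashlib.sha256(
            json.dumps(header, sort_keys=True).encode()).hexdigest()

    def make_block(transactions, prev_block=None):
        """Build a block whose header embeds a hash of the previous header."""
        header = {
            "timestamp": time.time(),
            "tx_hash": hashlib.sha256(
                json.dumps(transactions, sort_keys=True).encode()).hexdigest(),
            "prev_header_hash":
                header_hash(prev_block["header"]) if prev_block else "0" * 64,
        }
        return {"header": header, "transactions": transactions}

    genesis = make_block(["genesis"])
    block_1 = make_block(["Alice -> Bob: 1.23 BTC"], genesis)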

The role of the ‘miner’ in the Bitcoin system

Now, as far as I can tell, there is nothing in principle wrong with having the blockchain-building process run by one trusted computer, with the refreshed blockchain perhaps being broadcast out at intervals and stored redundantly on several servers as a protection against disaster.

But that’s not the way that Bitcoin chose to do things. They wanted the block-writing process to be done in a radically decentralised way, by servers competing against each other on a peer-to-peer network; they also chose to force these competing servers to solve tough puzzles that are computationally very expensive to process. Why?

Because intimately entangled in the way the Bitcoin ecology builds blocks is the way that new bitcoins are minted; at present the ‘reward’ from the system to a miner-machine for successfully solving the puzzle and making the latest block in the chain is 12.5 fresh new bitcoins, worth thousands of dollars at current exchange rates. That’s what motivates private companies to invest in mining hardware, and take part in the game.

This reward-for-work scheme is why the specialised computers that participate in the block-building competition are called ‘miners’.

Let’s assume that the miner has got as far through the process as verifying and bundling the transaction data, and has created the hash of the data for the header. At this point the Bitcoin system cooks up a mathematical puzzle based on the hash, which the ‘miner’ system making the block has to solve. These mathematical puzzles (whose finer points are beyond me, though the general shape can be sketched, as below) can be solved only by trial and error methods. Across the network, the competing miner servers are grinding away, trying trillions of possible answers, hashing the answers and comparing them to the header hash and the puzzle instructions to see if they’ve got a match.
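
From my own reading, the general shape is this: the miner varies a counter (the ‘nonce’) in the block header until the header’s hash, read as a number, falls below a target set by the network. In toy form, ‘below a target’ becomes ‘starts with enough zeros’:

    import hashlib

    def mine(header_data: str, difficulty: int) -> int:
        """Try nonces until sha256(header+nonce) starts with `difficulty` zeros."""
        nonce = 0
        while True:
            digest = hashlib.sha256(
                f"{header_data}{nonce}".encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce  # this nonce 'solves' the puzzle
            nonce += 1

    # Trial and error is slow; each extra zero multiplies the work by 16.
    print(mine("example block header", difficulty=5))

Checking a claimed solution, by contrast, takes a single hash, which is why other servers can ‘approve’ it so cheaply.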

This consumes a lot of computing power and energy – in 2014, one bitcoin ‘mining farm’ operator, Megabigpower in Washington state USA, estimated that it was costing 240 kilowatt-hours of electricity per bitcoin earned, the equivalent of 16 gallons of petrol. It’s doubtless gone up by now. The hashing power of the machines in the Bitcoin network has surpassed the combined might of the world’s 500 fastest supercomputers! (See ‘What is the Carbon Footprint of a Bitcoin?’ by Danny Bradbury: https://www.coindesk.com/carbon-footprint-bitcoin.)

When a miner ‘thinks’ it has a correct solution, it broadcasts it to the rest of the network and asks other servers to check the result (and, thanks to the hash-function check, though solving the problem is hard, checking the result is easy). All the servers that ‘approve’ the solution – strangely, it’s called a ‘nonce’ – will accept the proposed block, now timestamped and with a hash of the previous block’s header included to form the chainlink, and they update their local record of the blockchain accordingly. The successful miner is rewarded with a transaction that earns it a Block Reward, and I think collects some user transaction fees as well.

Because Bitcoin is decentralised, there’s always the possibility that servers will fall out of step, which can cause temporary forks and mismatches at the most recent end of the blockchain, across the network (‘loose ends’, you might call them). However, the way that each block links to the previous one, plus the timestamping, plus the rule that each node in the network must work with the longest extant version it can find, means that these discrepancies are self-repairing, and the data store is harmonised automatically even though there is no central enforcing agency.

The Bitcoin puzzle-allocation system dynamically adjusts the complexity of the puzzles so that they are being solved globally at a rate of only about six an hour. Thus, although there is a kind of ‘arms race’ between competing miners, running on ever faster competing platforms, the puzzles just keep on getting tougher and tougher to crack, and this is what controls the slow increase in the Bitcoin ‘money supply’. Added to this is a process by which the rate of reward for proof-of-work is slowly decreased over time, which in theory should make bitcoins increasingly valuable, rewarding the people who own them and use them.
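
The adjustment itself is mechanical (a detail from my own reading, not from the talk): every 2,016 blocks, Bitcoin rescales the difficulty according to how far the network strayed from the ten-minutes-per-block target, clamping the correction to a factor of four either way. A sketch:

    def retarget_difficulty(old_difficulty: float, actual_seconds: float) -> float:
        """Rescale so that 2,016 blocks take two weeks (ten minutes per block)."""
        expected_seconds = 2016 * 10 * 60
        ratio = max(0.25, min(4.0, expected_seconds / actual_seconds))
        return old_difficulty * ratio

    # If the last 2,016 blocks took only one week, difficulty doubles.
    print(retarget_difficulty(1.0, actual_seconds=7 * 24 * 3600))  # 2.0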

As I shall shortly explain, this computationally expensive ‘proof-of-work’ system is not a necessary feature of blockchain per se, and other blockchains use a less expensive ‘proof-of-stake’ system to allocate work.

Disentangling blockchain from Bitcoin

To sum up, in my opinion the essential characteristics of blockchain in general, rather than Bitcoin in particular, are as follows (and compare this with the Wikipedia extract quoted earlier):

  • A blockchain is a data structure that acts as a consultable ledger for recording sequences of facts, statuses, actions or transactions that occur over time. So it is not a database in the sense that a library catalogue is; still less could it be the contents of that library; but the lending records of that library could well be in blockchain form, because they are transactions over time.
  • New data, such as changes of status of persons or objects, are added by appending blocks of newly formed data; each block ‘points’ towards the previous one, and each block also gets a timestamp, so that together the blocks constitute a chain from oldest to newest.
  • The valuable data in the blocks are not necessarily encrypted (contrary to what some people say), so that with the right software, the record is open to inspection.
  • However, a fairly strong form of cryptographic hashing is applied to the data in each block, to generate a kind of internal digital certificate, which acts as a guarantee that the data has not become corrupted or maliciously altered. The hash string thus generated is recorded in the head of the block; and the whole head of the block will be hashed and embedded in the head of the following block, meaning that any alteration to a block can be detected (a short checking sketch follows this list).
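
Here is that checking sketch, my own illustration matching the simplified block shape sketched earlier; note that verification needs no secret knowledge, only recomputation:

    import hashlib, json

    def header_hash(header):
        return hashlib.sha256(
            json.dumps(header, sort_keys=True).encode()).hexdigest()

    def chain_is_intact(chain):
        """Recompute every certificate and back-link; tampering breaks one."""
        for block in chain:
            body_hash = hashlib.sha256(json.dumps(
                block["transactions"], sort_keys=True).encode()).hexdigest()
            if body_hash != block["header"]["tx_hash"]:
                return False  # body no longer matches its internal certificate
        for prev_block, block in zip(chain, chain[1:]):
            if block["header"]["prev_header_hash"] != header_hash(
                    prev_block["header"]):
                return False  # the link back to the previous block is broken
        return True

    # Given blocks shaped like the earlier sketch: chain_is_intact([genesis, block_1])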

And I believe we can set aside the following features which are peculiarities of Bitcoin:

  • The Bitcoin blockchain is a record of all the transactions that have ever taken place between all of the actors within the Bitcoin universe, which is why it is so giganormous (to coin a word). Blockchains that do not have to record value exchange transactions can be much smaller and non-global in scope – my personal medical record, for example, would need to journal only the experiences of one person.
  • All the data tracked by the Bitcoin blockchain has to live inside the blockchain; but blockchain systems can also be hybridised by having them store secure and verified links to other data repositories. And that’s a sensible design choice where the entire data bundle contains binary large objects (BLOBs) such as x-rays, scans of land title deeds, audio and video recordings, etc.
  • The wasteful and computationally expensive ‘proof of work’ test faced by Bitcoin miners is, to my mind, totally unnecessary outside of that kind of cryptocurrency system, and is a burden on the planet.

Marc shows a block

In closing his presentation, Marc displayed a slide image of the beginning of the record of block number 341669 inside the Bitcoin blockchain, from back in February 2015 when the ‘block reward’ for finding a ‘nonce’ was 25 Bitcoins. You can follow this link to examine the whole block on blockchain.info: https://blockchain.info/block/0000000000000000062e8d7d9b7083ea45346d7f8c091164c313eeda2ce5db11. The PDF version of this article (see below) contains some screen captures of this online record.

That block carries records of 1,031 transactions, with a value of 1,084 BTC, and it is about 377 kB in size (and remember, these blocks add up!). The transaction record data can be clearly read, even though it will not make much sense to human eyes, because of the anonymisation provided by the pseudonymous address of the sender and the pseudonymous destination address of the receiver. Thus all we can see is that ‘17p3BWzFeqh7DLELpodxt2crQjisvDbC95’ sent 50 BTC to ‘1HEhEpnDhRMUEQSxSWeV3xBoxdSHjfMZJ5’.

Other cryptocurrencies, other blockchain methods

Bitcoin has had quite a few imitators; a July 2017 article by Joon Ian Wong listed nine other cryptocurrencies – Ethereum, Ethereum Classic, Ripple, Litecoin, Dash, NEM, IOTA, Monero and EOS. (Others not mentioned include Namecoin, Primecoin, Nxt, BlackCoin and Peercoin.) That article also points to how unstable the exchange values of cryptocurrencies can be: in a seven-day period in July, several lost over 30% of their dollar values, and $7 billion of market value was wiped out!

From our point of view, what’s interesting is a couple of variations in how alternative systems are organised. Several of these systems have ditched the ‘proof-of-work’ competition as a way of winning the right to make the next block, in favour of some variant of what’s called ‘proof-of-stake’.

As an example, consider Nxt, founded in late 2013 with a crowd-sourced donation campaign. A fixed ‘money’ supply of a billion NXT coins was then distributed, initially in proportion to the contributions made; from this point, trading began. Within the Nxt network, the right to ‘forge’ the next block in the transaction record chain is allocated partly on the basis of the amount of the currency a prospective ‘forger’ holds (that’s the Stake element), but also on the basis of a randomising process. Thus the task is allocated to a single machine, rather than being competed for; and without the puzzle-solving element, the amount of compute power and energy required is slight – the forging process can even run on a smartphone! As for the rewards for ‘playing the game’ and forging the block, the successful block-forger gains the transaction fees.
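
The stake-plus-randomness idea can be sketched in a few lines of Python. (Nxt’s real algorithm derives the winner deterministically from on-chain data; this toy version shows only the stake-weighting principle, with invented account names and holdings.)

    import random

    stakes = {"alice": 500_000, "bob": 300_000, "carol": 200_000}  # hypothetical NXT holdings

    def pick_forger(stakes: dict) -> str:
        """Choose the next block-forger at random, weighted by stake, so an
        account holding half the coins forges roughly half the blocks."""
        accounts = list(stakes)
        return random.choices(accounts, weights=[stakes[a] for a in accounts], k=1)[0]

    print(pick_forger(stakes))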

Marc specifically mentioned Ethereum, founded in 2014–15, the currency of which is called ‘ether’. In particular he referred to how Ethereum supports ‘Smart Contracts’, which are exchange mechanisms performed by instructions in a scripting language being executed on the Ethereum Virtual Machine – not literally a machine, but a distributed computing platform that runs across the network of participating servers. Smart contracts have been explored by the bank UBS as a way of making automated payments to holders of ‘smart bonds’, and a project called The DAO tried to use the Ethereum platform to crowd-fund venture capital. The scripts can execute conditionally – the Lighthouse project is a crowd-funding service that makes transfers from funders to projects only if the funding campaign target has been met.
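
The conditional logic of such a crowd-funding contract is easy to sketch – here in Python rather than Ethereum’s own contract languages, and with invented names and figures. Funds go to the project only if the target is met; otherwise they are returned:

    def settle(pledges: dict, target: int) -> dict:
        """Release the pooled pledges to the project only if the campaign
        target was reached; otherwise refund every funder in full."""
        total = sum(pledges.values())
        if total >= target:
            return {"project": total}
        return dict(pledges)  # refunds

    print(settle({"ann": 60, "ben": 30}, target=100))  # target missed: refunds
    print(settle({"ann": 60, "ben": 50}, target=100))  # target met: payout of 110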

Other uses of blockchain distributed ledgers

In October 2015, a feature article in The Economist pointed out that ‘the technology behind bitcoin lets people who do not know or trust each other build a dependable ledger. This has implications far beyond the cryptocurrency.’ One of the areas of application they highlighted was the secure registration of land rights and real-estate transactions, and a pioneer in this has been Lantmäteriet, Sweden’s Land Registry organisation.

Establishing a blockchain-based, publicly inspectable record of the ownership (and transfer of ownership) of physical properties poses rather different problems from those of a system that simply transfers currency. The base records can include scans of signed contracts, digital photos, maps and similar objects. What Lantmäteriet aims to collect in the blockchain are what it dubs ‘fingerprints’ for these digital assets – SHA-256 hashes computed from the digital data. You cannot tell from a fingerprint what a person looks like, but it can still function as a form of identity verification. As a report on the project explains:

‘A purchasing contract for a real estate transaction that is scanned and becomes digital is an example. The hash that is created from the document is unique. For example, if a bank receives a purchasing contract sent via email, the bank can see that the document is correct. The bank takes the document and run the algorithm SHA-256 on the file. The bank can then compare the hash with the hash that is on the list of verification records, assuming that it is available to the bank. The bank can then trust that the document really is the original purchasing contract. If someone sends an incorrect contract, the hash will not match. Despite the fact that email has a low level of security, the bank can feel confident about the authenticity of the document.’ (‘The Land Registry in the blockchain’ — http://ica-it.org/pdf/Blockchain_Landregistry_Report.pdf)
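
The bank-side check described in that passage amounts to a few lines of code. A sketch follows (the ‘contract’ here is just stand-in bytes, not a real document):

    import hashlib

    def fingerprint(data: bytes) -> str:
        """SHA-256 'fingerprint' of a digital asset, e.g. a scanned contract."""
        return hashlib.sha256(data).hexdigest()

    original = b"scanned purchasing contract"   # stands in for the real PDF scan
    recorded = fingerprint(original)            # the hash on the verification record

    received = original                         # the document arriving by email
    print("authentic" if fingerprint(received) == recorded else "hash mismatch")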

In the UK, Her Majesty’s Land Registry has started a project called ‘Digital Street’ to investigate using blockchain to allow changes of property ownership to complete almost instantaneously. Greece, Georgia and Honduras have similar projects under way.

In Ghana, there is no reliable nationwide way of registering ownership of land and property, but a nonprofit project called Bitland is drawing up plans for a blockchain-verified process for land surveys, agreements and documentation, which – independent of government – will provide people with secure title (www.bitland.world). As they point out, inability to prove ownership of land is quite common across Africa, and this means that farmers cannot raise bank capital for development by putting up land as security.

Neocapita is a company that is developing Stoneblock as a decentralised blockchain-based registration service for any government-managed information, such as citizen records. They are working in collaboration with the United Nations Development Program, World Vision, and two governments (Afghanistan and Papua New Guinea), initially around providing a transparent record of aid contributions, and land registry.

Noeleen Schenk on blockchain and information governance

After Marc Stephenson had given his technical overview of Blockchain, Noeleen Schenk (also of Metataxis) addressed the issue of what these developments may mean for people who work with information and records management, especially where there are issues around governance.

Obviously there is great interest in blockchain in financial markets, securities and the like, but opportunities are also being spotted around securing the integrity of the supply chain and proving provenance. Walmart is working with IBM on a project that would reliably track foodstuffs from source to shelf. The Bank of Canada is looking at using blockchain methods to verify customer identities on an ongoing basis, on the grounds that the bank has already gone through identity checks when you opened your account. Someone in the audience pointed out that there are also lots of applications for verified records of identity in the developing world, and Noeleen mentioned that Microsoft and the UN are looking at methods to assist the approximately 150 million people who lack proof of identity.

Google DeepMind Health is looking at using some blockchain-related methods around electronic health records, in a concept called ‘Verifiable Data Audit’, which would automatically record every interaction with patient data (changes, but also access). They argue that health data needn’t be as radically decentralised as in Bitcoin’s system – a federated structure would suffice – nor is proof-of-work an appropriate part of the blockmaking process in this context. The aim is to secure trust in the data record (though ironically, DeepMind was recently deemed to have handled 1.6 million Royal Free Hospital patient records inappropriately).

Noeleen referred to the ISO standard on records management, ISO 15489-1, which defines the characteristics of ‘authoritative records’ as meeting standards for authenticity, reliability, integrity and usability. What has blockchain to offer here?

Well, where a blockchain is managed on a decentralised processing network, one advantage can be distributed processing power, and avoidance of the ‘single point of failure’ problem. The use of cryptographic hashes ensures that the data has not been tampered with, and where encryption is used, it helps secure data against unauthorised access in the first place.

Challenges to be solved

Looking critically at blockchain with an information manager’s eye, Noeleen noticed quite a few challenges, of which I highlight some:

  • Private blockchains are beginning to make their appearance in various sectors (the Walmart provenance application is a case in point). This raises questions of what happens when different information management systems need to interoperate.
  • In many information management applications, it is neither necessary nor desirable to have all of the information actually contained within the block (the Lantmäteriet system is a case in point). Bringing blockchain into the picture doesn’t make the problem of inter-relating datasets go away.
  • Blockchain technology will impact the processes by which information is handled, and people’s roles and responsibilities within those processes. Centres of control may give way to radical decentralisation.
  • There will be legal and regulatory implications, especially where information management systems cross different jurisdictions.
  • Noeleen has noticed that where people gather (with great enthusiasm) to discuss what blockchain can do, there seems to be very poor awareness amongst them of well-established record-keeping theory, principles, and normal standards of practice. The techies are not thinking enough about information management requirements.

These issues require information professionals to engage with the IT folks, and advocate the incorporation of information and record-keeping principles into blockchain projects, and the application of information architectural rigour.

Intermediate discussion

Following Noeleen’s presentation, there were some points raised by the audience. One question was how, where the blockchain points to data held externally, that external data can itself be verified, and how it can be secured against inappropriate access.

Someone made the point that it is possible to set up a ‘cryptographic storage system’ in which the data is itself encrypted on the data server, using well-established public–private key encryption methods, and therefore accessible only to those who have access to the appropriate key. As for the record in the blockchain, what that stores could be the data location, plus the cryptographic hash of the data, so that any tampering with the external data would be easy to detect.
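
In other words, the on-chain record can be as small as a location plus a hash. A minimal sketch of such a record, with an invented URL and stand-in data:

    import hashlib, json

    def chain_record(location: str, data: bytes) -> str:
        """What goes on the chain: where the (possibly encrypted) data lives,
        plus its SHA-256 hash, so off-chain tampering is detectable."""
        return json.dumps({"location": location,
                           "sha256": hashlib.sha256(data).hexdigest()})

    stored = b"encrypted patient x-ray bytes"            # held on an external server
    record = chain_record("https://store.example/obj/42", stored)

    # Later, anyone fetching the object re-hashes it against the chain record.
    fetched = stored
    assert hashlib.sha256(fetched).hexdigest() == json.loads(record)["sha256"]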

What blockchain technology doesn’t protect against is bad data quality at the point of entry. I’m reminded of a recent case in which it emerged that a sloppy clinical coder had entered a code on a lady’s record indicating that she had died of Sudden Infant Death Syndrome (happily, she was very much alive). Such a transaction could never be erased from a blockchain – but that doesn’t stop the record being corrected afterwards.

John Sheridan: Blockchain and the Archive: the TNA experience

Our third presentation was from John Sheridan, the Digital Director at The National Archives (TNA), with the title ‘Application of Distributed Ledger Technology’. He promised to explain what kinds of issues the Archive worries about, and where they think blockchains (or distributed ledgers more generally) might help. On the digital side of TNA, they are now looking at three use-cases, which he would describe.

John remarked that the State gathers information ‘in order to make Society legible to it’ – so that it might govern. Perhaps The Domesday Book was one of the world’s first structured datasets, collected so that the Norman rulers might know who owned what across the nation, for taxation purposes. The Archive’s role, on the other hand, is to enable the citizen to see the State, and what the State has recorded, by perusing the record of government (subject to delays).

Much of the ethos of the TNA was set by Sir Hilary Jenkinson, of the Public Record Office (which merged with three other bodies to form TNA in 2003). He was a great contributor to archive theory, and in 1922 wrote A Manual of Archive Administration (text available in various formats from The Internet Archive, https://archive.org/details/manualofarchivea00jenkuoft). TNA still follows his attitude and ideas about how information is appraised and selected, how it is preserved, and what it means to make that information available.

An important part of TNA practice is the Archive Descriptive Inventory – a hierarchical organisation of descriptions for records, in which is captured something of the provenance of the information. ‘It’s sort of magnificent… it kind of works,’ he said, comparing it to a steam locomotive. But it’s not the best solution for the 21st century. It’s therefore rather paradoxical that TNA has been running a functional digital archive with a mindset that is ‘paper all the way down’ – a straight line of inheritance from Jenkinson, using computers to simulate a paper record.

Towards a second-generation digital archive

It’s time, he said, to move to a second-generation approach to digital archive management; and research into disruptive new technologies is important in this.

For the physical archive, TNA practice has been more or less to keep everything that is passed to it. That stuff is already in a form that they can preserve (in a box), and that they can present (only eyes required, and maybe reading spectacles). But for the digital archive, they have to make decisions against a much more complex risk landscape; and with each generation of technological change, there is a change in the digital preservation risks. TNA is having to become far more active in deciding what evidence the future may want to have access to, which risks they will seek to mitigate, and which ones they won’t.

They have decided that one of the most important things TNA must do is to provide evidence for purposes of trust – not only in the collection they end up with, but also in the choices that they have made in respect of that collection. Blockchain offers part of that solution, because it can ‘timestamp’ a hash of the digital archive asset (even if they can’t yet show it to the public), and thereby offer the public an assurance, when the archive data is finally released, that it hasn’t been altered in the meantime.

Some other aims TNA has in respect of the digital archive include: being more fluid about how an asset’s context is described; dealing with uncertainties in provenance, such as about when a record was created; and permitting a more sophisticated, perhaps graduated, form of public access, rather than just now-you-can’t-see-it, now-you-can. (They can’t simply dump everything on the Web – there are considerations of privacy, of the law of defamation, of intellectual property and more besides.)

The Archangel project

Archangel is a brand new project in which TNA is engaged together with the University of Surrey’s Centre for the Digital Economy and the Open Data Institute. It is one of seven projects that EPSRC is funding to look at different contexts of use for distributed ledger technology. Archangel is focused specifically on public digital archives, and the participants will try to work with a group of other memory institutions.

The Archangel project will not be using the blockchain methods that Marc had outlined. Apparently, they have their own distributed ledger technology (DLT), with ‘permissioned’ access.

The first use-case, which will occupy them for the first six months, will focus on a wide variety of types of research data held by universities: they want to see if they can produce sets of hashes for such data, such that at a later date, when the findings of the research are published and the data is potentially archived, any question of whether the data has been tampered with or manipulated can be dealt with by cryptographic assurance spread across a group of participating institutions. (The so-called ‘Climategate’ furore comes to mind.)

The second use-case is for a more complex kind of digital object. For example, TNA preserves the video record of proceedings of The Supreme Court. In raw form, one such digital video file could weigh in at over a terabyte! Digital video transcoding methods, including compression algorithms, are changing at a rapid pace, so that in a decade’s time it’s likely that the digital object provided to the public will have to have been converted to a different file format. How is it possible to create a cryptographic hash for something so large? And is there some way of hashing not the bit sequence, but the informational content in the video?
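
The sheer size, at least, is not the obstacle: a hash can be computed over a stream in constant memory, as the sketch below shows. The genuinely hard, open question is the second one – hashing the informational content so that the fingerprint survives transcoding.

    import hashlib

    def hash_large_file(path: str) -> str:
        """Stream an arbitrarily large file (say, a terabyte of court video)
        through SHA-256 one megabyte at a time; memory use stays constant."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()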

It’s also fascinating to speculate about how machines in future might be able to interpret the informational content in a video. At the moment, a machine can’t interpret the meaning in someone’s facial expressions – but maybe in the future?

For this, they’ll be working with academics who specialise in digital signal processing. They are also starting to face similar questions with ‘digital surrogates’ – digital representations of an analogue object.

The third use-case is about Deep Time. Most people experimenting with blockchain have a relatively short timescale over which a record needs to be kept in verifiable form, but the aspirations of a national archive must look to hundreds, maybe thousands, of years.

Another important aspect of the Archangel project is the collaboration that is being sought between memory institutions, which might reach out to each other in a concerted effort to underscore trust in each other’s collections. On a world scale this is important because there are archives and collections at significant risk – in some places, for example, people will turn up with Kalashnikovs to destroy evidence of human rights abuses.


Discussions and some closing thoughts

Table group discussions: NetIKX meetings typically feature a ‘second half’, which is made up of table-group discussions or exercises (syndicate sessions), followed by a summing-up plenary discussion. However, the speakers had not organised any focused discussion topics, and certainly the group I was in had a fairly rambling discussion trying to get to grips with the complexity and novelty of the subject. Likewise, there was not much ‘meat’ that emerged in the ten minutes or so of summing up.

One suggestion from Rob Begley, who is doing some research into blockchain, was that we might benefit from reading Dave Birch’s thoughts on the topic – see his Web site at http://www.dgwbirch.com. However, it’s to be borne in mind that Birch comes at the topic from a background in electronic payments and transactions.

My own closing thoughts: There is a lot of excitement – one might say hype – around blockchain. As Noeleen put it, in the various events on blockchain she had attended, a common attitude seems to be ‘The answer is blockchain! Now, what was the problem?’ As she also wisely observed, the major focus seems to be on technology and cryptocurrency, and the principles of information and records management scarcely get a look-in.

The value of blockchain methods seems to centre chiefly on questions of trust, using cryptographic hashing and a decentralised ledger to create a hard-to-subvert, time-stamped record of transactions between people. The transactional data could be about money (and there are those who suggest it is the way forward for extending banking services in the developing world); the application to land and property registration is also very promising.

Another possible application I’m interested in could be around ‘time banking’, a variant of alternative currency. For example in Japan, there is a scheme called ‘Fureai Kippu’ (the ‘caring relationship ticket’), which was founded in 1995 by the Sawayaka Welfare Foundation as a trading scheme in which the basic unit of account is an hour of service to an elderly person who needs help. Sometimes seniors help each other and earn credits that way, sometimes younger people work for credits and transfer them to elderly relatives who live elsewhere, and some people accumulate the credits themselves against a time in later life when they will need help. It strikes me that time-banking might be an interesting and useful application of blockchain – though Fureai Kippu seems to get on fine without it.

When it comes to information-management applications that are non-transactional, and which involve large volumes of data, a blockchain system itself cannot cope: the record would soon become impossibly huge. External data stores will be needed, to which a blockchain record must ‘point’. The hybrid direction being taken by Sweden’s Lantmäteriet, and by the Archangel project, seems more promising.

As for the event’s title ‘The implications of Blockchain for KM and IM’ — my impression is that blockchain offers nothing to the craft of knowledge management, other than perhaps to curate information gathered in the process.

Some reading suggestions

Four industries blockchain will disrupt (https://www.researchandmarkets.com/blog/4-industries-blockchain-will-disrupt)

Two billion people lack access to a bank account. Here are 3 ways blockchain can help them (https://www.weforum.org/agenda/2017/06/3-ways-blockchain-can-accelerate-financial-inclusion)

TED talk: Don Tapscott on ‘How the blockchain is changing money and business’ (https://www.ted.com/talks/don_tapscott_how_the_blockchain_is_changing_money_and_business)

Why Ethereum holds so much promise (http://uk.businessinsider.com/bitcoin-ethereum-price-2017-7)

Wikipedia also has many valuable articles about blockchain, cryptographic hashing, etc.

Note:

The original version of this article can be found at http://www.conradiator.com/kidmm/netikx-jul2017-blockchain.html. You can also download a pdf (9 pages; 569 kB): http://www.conradiator.com/kidmm/netikx-resource/NetIKX-blockchain.pdf.

Gurteen Knowledge Café – 16 March 2017

Conrad Taylor writes:

In 2017, its twentieth anniversary year, NetIKX is running a series of meetings with speakers who have spoken to us before. In March we invited David Gurteen to speak around the topic of ‘entrained and entrenched thinking’, and other constraints on knowledge sharing – and, what we can do about it. Specifically we wanted him to run one of his Knowledge Café events for us, in part because that process incorporates features designed to widen the scope of conversation and the consideration of diverse points of view.

As usual, these notes are constructed from my personal perspective.

About entrenched and entrained thinking

‘Entrenched’ thinking is something we pretty much understand. It’s when people refuse to consider the validity of any idea but their own, and it is often encountered in groups that see themselves as actively in opposition to another group. They are ‘dug in’ and refuse to budge. We’ve seen a lot of that in politics in the last year, but it occurs in all sorts of social and business environments too.

The phrase ‘entrained thinking’ is less familiar. It may have been coined by Dave Snowden and Mary Boone in their article in Harvard Business Review in 2007, where they define it as ‘a conditioned response that occurs when people are blinded to new ways of thinking by the perspectives they acquired through past experience, training, and success’. They note that both leaders and experts can fall into entrained thinking habits, which cause them to ignore both insights from alternative perspectives and those offered by people whose opinions they have come to disregard as irrelevant.

Evolutionary biology suggests reasons why falling back on available quick-and-dirty patterns of thinking (heuristics) has survival advantages over thinking everything through carefully from every conceivable angle; as Dave Snowden says, when you come across a lion in savannah country, it’s best not to analyse the situation too thoroughly before legging it up a tree. In his book Administrative Behavior (1947), Herbert Simon referred to such just-good-enough thinking as satisficing, and a study of the nature and role of heuristics in decision making was also central to Amos Tversky’s and Daniel Kahneman’s argument in Judgement Under Uncertainty: Heuristics and Biases (1982), which also introduced the concept of cognitive bias – a concept to which David Gurteen made reference.

However, there are times and situations in which it is good to cast a wider net for alternative ideas, which may turn out to be better than the established, so-called ‘tried and tested’ ones. The technique of brainstorming was pioneered in the field of advertising by Alex Osborn in 1939, and Edward de Bono introduced the concept of lateral thinking in 1967, following that with a veritable spate of books on creative thinking techniques.  (Note: The brainstorming process has been brought into question recently; see http://www.newyorker.com/magazine/2012/01/30/groupthink)

In this seminar and Knowledge Café workshop, David Gurteen focused on those blockages to ideas production and sharing that can occur in meetings and group conversations, and the actual practice of his Café technique shows some ways this can be done. So let’s get to understand the Café process, then move on to how David introduced our session, and close with a brief report of what came up in the closing in-the-round plenary session.

Introducing the Café process

David’s Knowledge Café process is a version of the World Café technique first devised in the mid 1990s by Dr Juanita Brown and David M Isaacs, an American couple who work with organisations to unleash ‘collective intelligence’ through ‘conversations that matter’. (See http://www.theworldcafe.com/) These techniques have been used by industrial companies, such as Hewlett-Packard and Aramco, and by local governments and non-profits in the context of social and economic development and community relations.

David says that he adopted the format as an antidote to ‘Death by PowerPoint’. He started running his Knowledge Café series in September 2002, in the Strand Palace Hotel. A number of NetIKX members have taken part in free London Knowledge Café events, which David facilitates about six times a year. More information can be found on his knowledge café website http://knowledge.cafe/

David has also run such sessions professionally for organisations across Europe, Asia and the Americas. They seem to work well in a wide range of cultural settings – even, he said, in those Asian cultures in which people often defer to authority. In a small group, it is easier to speak up about what you think, though as an organiser of such an event you may need to ensure that the groups are made up of equals.

The essence of the Café technique is to enable discussion around an open-ended question. Participants are divided into groups of three, four or at most five people, sat around tables (note: this is smaller than the typical size of a table group at a NetIKX workshop). In David’s events, the starting question is framed by having a speaker make a very short initial presentation of the topic – typically ending with the posing of such a question.

After the discussion around tables has gone on for some time, generally 15 minutes, the facilitator asks the table groupings to break up and re-form – for example, two people might stay on the same table while two move off to join other tables. After another 15 minutes’ conversation, the table groups are once again re-organised for a third round. David never uses more than three rounds of conversation in his own practice. The general aim of such Café techniques is to help people to accumulate a cascade of perspectives, and to widen their thinking.

There are variations on this theme. One World Café practice is to put a large sheet of paper on each table and encourage people to jot down ideas or doodle pictures during their conversations, so that the next group gets a sense of what has gone before. Another version appoints a ‘table host’ who stays with the table, relays ideas from the previous round, and encourages the new group to add ideas and perspectives to what has gone before. Such a person might also act as a rapporteur in a closing plenary session.

David’s practice dispenses with table-level facilitators (and doodle pads and rapporteurs), which makes a Gurteen Café easier to organise. The accumulation of perspectives tends to happen anyway, as people tend to share, with their new group, the ideas that came up in the previous one.

In David’s version of the Café, he said, there is no reporting back. The café’s outcomes are about what each individual takes away in his or her head – and that will be different for each person. As Theodore Zeldin says, the best conversations are those from which you emerge as a slightly different person.

However, David later qualified that by mentioning circumstances in which gathering and reporting the ideas that surface can be very valuable as a contribution to a problem-solving process – for a company or project, for example. His own general events tend to end these days with a closing session bringing everyone together in a circle for freeform sharing of ideas ‘in the round’ – space permitting. We did this at the NetIKX Café.

David explained a few guiding principles. The Café is about dialogue, not debate – it’s not about winning arguments, but nor is it about seeking consensus. The idea is simply to bring ideas and issues to the surface. And it is OK to drift off-topic.

Asked whether it is different to run a Café session inside a particular organisation, David responded that he’s found that the format can be used for brainstorming or as an adjunct to problem-solving; in that case, one should start in advance by defining the objective, and design the process accordingly. For such gatherings, you probably do want to include a method for capturing the ideas that arise. But any such capture mechanism must not get in the way of the conversation – giving one person a flipchart and a pen will put them in charge and distort the free exchange of ideas.

Our meeting topic

David explained that in his short pre-Café presentation he would touch on some challenges that we need to overcome in order to make better decisions, and to be more creative in coming up with ideas. In the café process we would then discuss how we might mitigate these challenges.

Cognitive bias. David recommended that we take a look at the Wikipedia article about ‘Cognitive bias’. That in turn links to a page ‘List of cognitive biases’ — something like 200 of them, although it has been argued that they can be grouped into four broad categories, arising from too much information, not enough meaning, the need to act or decide quickly, and the limits of memory. One of the ideas that has made it into common parlance recently is ‘confirmation bias’ – we tend to pay heed to ideas that reinforce our existing views.

Entrained thinking. This seems to be a relatively new idea, put forward by Dave Snowden and Mary Boone as described above. The idea is that we are conditioned towards certain ways of thinking, and it can be because of our education and training. We are also influenced by our culture, the environment in which we have grown up, and our experiences. These influences are so subtle and ingrained that we are probably not aware of them.

David asked me (Conrad) if I see things the same way. I replied that I do – but that although ‘entrained thinking’ appears to be a new term, it isn’t really a new idea. When I was studying the History of Science at Glasgow University, an influential book was Thomas Kuhn’s The Structure of Scientific Revolutions (1962) – the book that introduced the phrase ‘paradigm shift’ to the English language. Kuhn argued that scientific progress was not, as generally assumed, a matter of development by accumulation of facts and theories, but more episodic, involving the overthrow of previously prevailing ways of seeing the world. And until the new paradigm prevails, the old one will have its deeply entrained defenders.

One example that Kuhn analysed at length was ‘the Copernican revolution’, which argued that the earth orbits the sun, rather than the other way around. Darwin’s theory of evolution also met with strong opposition from people invested in a creationist narrative and Biblical timescale for earth’s existence, and more recently the theory of plate tectonics and continental drift was resisted and mocked until the 1960s – yet it is now one of the ground truths of geological science. So Kuhn’s idea of a ‘paradigm’ – as a way of thinking that one considers normal and natural (but may later be replaced by a better one) – does carry in it a notion similar to ‘entrained thinking’.

Entrenched opinions. People may be resistant to taking new ideas on board – they take an entrenched position. Such people are not prepared to listen; they ‘know they are right’ and refuse to consider an alternative interpretation. In this case people may be very conscious of their views, which are closely bound up with their sense of themselves.

‘Speaking truth to power’ is a phrase that we hear a lot – it could mean not being afraid to say something to your boss, even though the consequences for you could be dire. The phrase recognises that power relations influence whether we choose to express our thoughts and views openly.

Loss of face. If you’ve always got to look good, it’s very difficult to speak up.

The Spiral of Silence – also called ‘social silencing’ – is an idea David encountered only recently. It’s a concept in political science and mass communication theory put forward by Elisabeth Noelle-Neumann, who argues that fear of isolation, of being excluded or rejected by society because you hold a minority opinion, may stop you from speaking out. She also argues that the media not only shape what ideas are the dominant ones, but also what people perceive to be the dominant ideas, even though that perception may not accord with reality. (Much of the mainstream media is telling us that the British public are united in a determination to leave the EU, for example.)

A related critique of social media – Facebook, for example – is that it encourages people to live in bubbles of confirmation bias, connecting us to people who share the same ideas as ourselves.

Groupthink is a well known term. Perhaps people in a meeting do all think the same way – or is there a sizeable group who think differently and just don’t want to rock the boat?

Last on David’s list was facilitator bias – was he, for example, in a position to bias our thinking?

The questions for the Café

So here were a few barriers that can get in the way of a good conversation, and thus impoverish group creativity and problem solving. David invited us to go into Café mode and talk about how to overcome these problems.

In the promotional text for this meeting, we had asked three questions, and David suggested that each ‘round’ of the Café might look at one of them.

  • The first question is, what factors in people’s backgrounds, professional education and culture, lead to them having a ‘blinkered’ view of the range of available opinions and policy decisions, especially at work? How might this be mitigated?
  • Second, when we meet together in groups to decide something in common, to come to a practical decision, what meeting dynamics are getting in the way of us accessing the broadest possible range of opinions and inputs? Could we be running those meetings differently and getting better results?
  • Finally, what are those two questions forgetting to consider?

Big Circle discussion notes

After three rounds of ‘Café table talk’, we rolled the tables out to the edges of the room and created a circle of chairs (there were about forty of us), and continued the conversation in that mode for about 25 minutes. I’m not going to report this blow by blow, but highlight some ideas put forward, as well as comment on the process.

It’s worth pointing out that the dynamics of discussion in the larger group were (as one might expect) very different from in the small groups. Some people said a lot, while about half said nothing at all. For the first nine minutes, about ten people spoke, and all were men. There was a tendency for one person to make a point that was reacted to by another person and then another and so on, in a ‘chain reaction’, even if that meant we drifted off topic. For about five minutes, the tone across the room got quite adversarial. So while the technique of making a big circle does help to gather in what had been thought across the table groups in a Knowledge Café, it can have its demerits or perils.

Meeting management methods. Steve Dale mentioned that at the LGA’s Improvement and Development Agency, there was a manager who used to take his team out on a walk – about ten people – and they talked as they walked. People wondered how practical that was! David Penfold suggested that if they walked and talked in groups of three, then they could stop at the pub and have the whole-group session – a Knowledge Café on legs!

Steve also pointed out that in some meetings – with fixed time and a business agenda – a free-flowing conversation would waste time and get in the way. Various people noted that one could loosen up thinking with a Café-style session or brainstorming, and follow that with a decision-making meeting – preferably after a fallow period for reflection.

Someone outlined a method she finds useful for eliciting a wide range of contributions. Pose an issue and get people to reflect on it individually for a while, in silence; then ‘pair and share’ with one other, to practise articulating your own ideas and also listening to others. Then you can progress to groups of four; then feed back from the groups to the whole assembly. When you are divided into small groups, we noted, the dominant types can only dominate a small group!

Dominance in group situations. Gender dominance or imbalance can affect the dynamic in discussions; so could dominance by age or ethnicity. Clare Parry spoke of occasions when someone from a minority makes a point and it is ignored; then someone from the majority says it, and suddenly it is a fantastic idea. These biases might be subconscious; but a younger person thought that discounting the opinions of younger people could actually be a quite conscious bias, based on the opinion that older people are more likely to know what they are talking about.

Bad paradigm shifts and entrainment. I (Conrad) thought it would be a mistake to think that paradigms always shift in the right direction! An example might be an assumption that information management is something that computer people do… We debated for a while whether this assumption was as widespread as 20 years ago: opinion differed.

Dion Lindsay, in his work around both information and knowledge management, finds that information professionals make a huge assumption that they are the best people to lead an organisation’s efforts in knowledge management. They see a continuum between librarianship, information management and knowledge management – which is not how the rest of the organisation sees things. And that, he said, is an example of entrained thinking (on both sides, perhaps).

Unfortunately, but predictably for this NetIKX crowd, this issue of IM and KM and IT set off a long series of exchanges about the rights and wrongs of managing information in a technology environment, which strayed right off the point – and got quite heated!

Influencing culture from the top down. One table conversation speculated that if a bunch of people at board level have got stuck in a rut with a particular way of doing things, it could be mitigated by bringing in someone with different thinking – like a football team appointing a maverick manager to shake things up. On the other hand, this could backfire if ‘the old guard’ react by resisting and subverting the ‘outsider’.

An open, learning culture. Stuart Ward argued that organisational culture can be a positive influence on how decisions are made  – if the people at the top visibly promote diverse thinking by asking people for inputs and opinions. Nor should people be penalised for making mistakes, if the result is learning that can be shared to improve future results.

We came to no shared and agreed conclusions – but that’s not what a Knowledge Café does.  Everyone took something different away in their heads.

Survey Results

Naomi Lees, NetIKX Membership Secretary writes:

Thank you to everyone who responded to our NetIKX survey earlier this year. We had some very interesting and useful responses.

Here is a brief overview of the points raised, and what NetIKX plans to do over the next 12 months:

Programme Planning

We had some very useful feedback on the seminar topics you would like to see, especially around the future and value of KIM; as well as practical KIM tools and techniques. You will be pleased to know that we will be covering all these aspects and more in our programme in 2017 and early 2018, so check www.netikx.org/events or https://netikx.wordpress.com/events/ for details of future events.

We also had some other useful suggestions for future seminar topics, which our programme planner has taken away for further cogitation! Watch this space for further updates.

Events outside London

You said that you would like to see more events outside London – we are currently looking at ways we can make this happen. If you are keen to host an event outside London, please get in touch.

Partnering with other KIM Groups

We had some very encouraging feedback on developing partnerships with other KIM groups. You will be pleased to know that we have a KIM Communities event coming up soon. We are always interested in building connections with other KIM groups, so please get in touch if you have any ideas for joint-working.

NetIKX Website

We received several comments on the website and we are really grateful for this feedback. You will be pleased to know that we are currently working on a new website, with lots of the features you have asked for, such as more KIM resources and the ability to make electronic payments.

The survey results can be viewed here: https://www.surveymonkey.com/results/SM-VZXFF523/


Information Design, with Conrad Taylor and Ruth Miller

On 26 January 2017, the speakers at the NetIKX meeting were Conrad and Ruth. Conrad has written up the two talks below. A fuller account of his own talk can be found on his Conradiator site at http://www.conradiator.com/kidmm/netikx-infodesign-conrad.html, as he notes below.

Photo David Dickinson


For some comments on the meeting, by Claire Parry, see the very end of this report.

Conrad’s Account

The topic of the NetIKX seminar on 26 January 2017 was ‘Information Design – approaches to better communication’. Information Design (ID) is a collection of practices that use visual design, clear writing and attention to human factors to make information easier to understand – especially information for the general public. Information designers work across a range of types of media, from road signs to government forms, user manuals to transport maps, bank statements to legal contracts, and increasingly information presented through computer interfaces and Web sites.

I was the first speaker, running through a background history and the theoretical underpinnings of ID, and showing examples with a strong visual component. Ruth Miller then took over and focused on the Plain Language side of things. Both Ruth and I have been active around the Information Design Association in the UK (IDA) for 25+ years.

Here, I’m giving only a brief summary of my own presentation; as I had prepared it in written form with illustrations, I’ve thought it best to convert that into a kind of stand-alone essay; you can find it at http://www.conradiator.com/kidmm/netikx-infodesign-conrad.html. Ruth’s contribution, however, is presented below at greater length, as it isn’t represented elsewhere.

Introducing Information Design

In my opening presentation I explained that the awkward label ‘Information Design’ emerged in the late 1970s as a rallying point for a diverse bunch of folk committed to clarity and simplicity in information presentation. That led to the founding of the Information Design Journal, a series of conferences, and organisations such as the IDA. Some people came into this from a graphic design background; some were committed to the simplification of written language. Psychologists, linguists and semioticians have also contributed their insights.

Despite this avowed interdisciplinarity, the ID community has sadly kept aloof from people in information and knowledge management. One of the exceptional people acting as a bridge is Liz Orna, long associated with NetIKX and its predecessor the Aslib IRM Network. In her writing, Liz has long emphasised the important role of ‘information products’ as artefacts designed for conveying knowledge.

Visual examples across the ages

I then conducted a whistle-stop history tour of innovation in making complicated stuff easier to understand through pictorial and typographic means, including:

  • Tables, a surprisingly old way of handling information (reaching way back to Sumeria in about 2500 BCE). My table examples included tide-tables, ‘ready reckoners’, and text in tabular formats.
  • Diagrams/drawings, ranging from more exactingly accurate ones such as anatomical atlases and sea-navigation charts, to line drawings and schematic diagrams which remove unnecessary detail so that they can focus on communicating (for example, how things work).
  • Harry Beck’s London Underground diagram got a special mention, given its iconic status. It is often called a ‘map’ but in reality it is a service network diagram, and this approach to transport information has been copied worldwide.

Harry Beck underground diagram

  • Charts and graphs including Joseph Priestley’s first timeline, William Playfair’s invention of the line and area chart, and Florence Nightingale’s ‘coxcomb diagrams’ for presenting statistics.
  • Data maps, such as John Snow’s 1854 plot of cholera deaths around the Broad Street pump in Soho.
  • Network diagrams as used to represent links between entities or people, or to explain data flows in a software system.

I also mentioned business forms and questionnaires as an important genre, but I left this topic to Ruth who has more experience with these.

Where did Information Design thinking come from?

The above examples, which I illustrated with pictures, show trends and innovations in the presentation of information. Next I looked at how the quest for clear communication became more conscious of itself, more bolstered with theory, and better organised into communities of practice.

This seems to have happened first in improving the clarity of text. In the 1940s, Rudolf Flesch and Robert Gunning proposed some objective ways of measuring the readability of text, by calculations involving the length of sentences and the average number of syllables per word.
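
Flesch’s Reading Ease score, for instance, combines just those two measures; higher scores mean easier text, and roughly 60–70 is usually counted as plain English. A small illustration (the sample counts are invented):

    def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
        """Flesch's formula: penalise long sentences and polysyllabic words."""
        return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

    # 100 words in 5 sentences with 140 syllables scores about 68: comfortably readable.
    print(flesch_reading_ease(100, 5, 140))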

Flesch Readability Chart

In the UK, Sir Ernest Gowers formulated a guide to plain English writing to educate civil servants, culminating in the famous book The Complete Plain Words, which is still in print after six decades and a number of revisions.

In the Second World War, the technical sophistication of weapons plus, in Britain, the need to engage the public in war preparedness seem to have been drivers for innovations in technical documentation and the creation of training materials, and the job description ‘Technical Author’ came into being. As this trend in technical documentation continued in the post-War era, technical communicators organised themselves into associations like the STC and ISTC. In the richer industries such as aerospace, technical documentation also pioneered the use of early WYSIWYG computer systems like the Xerox Documenter and Interleaf for document composition.

In 1943, the UK Medical Research Council formed its Applied Psychology Unit in Cambridge, initially to investigate how to help armed forces personnel understand and cope with information under stressful conditions. Post-war, APU researcher Pat Wright went on to investigate factors in text legibility and comprehension; Don Norman contributed to the establishment of Cognitive Science as a discipline, and helped Apple Computer as its first User Experience Architect.

In 1978, NATO sponsored a conference in the Netherlands about human factors and the design of non-electronic information displays; the papers were published as Information Design in 1984. The Information Design Journal was set up in the aftermath of the event and was then the focus for a number of conferences in the UK. As for the IDA, it was launched in 1991.

Some issues and developments

I rounded off my presentation by touching on three issues which have been woven in and out of Information Design practice down the years:

  • ‘Desktop publishing’, which put typesetting control and on-screen design into the hands of graphic designers, was a powerful enabler for information designers in particular.
  • Understanding the reader remains a challenge for anyone who truly seeks to communicate clearly. It’s dangerous to make assumptions about what will make sense to a user community unless you find out about that community. Today there is growing sophistication in using qualitative research methods and even ethnography to inform more effective writing and design.
  • Prototyping and usability testing – making prototypes is easier than before. Testing them with a sample of people representative of the eventual users can provide very useful insights, as Ruth would later illustrate from her own experience.

I closed my section of the meeting by speculating that the realm of information and knowledge management has hitherto tended to be dominated by librarians and like professionals, who focus on curating and organising collections of information resources. I would like there to be more engagement between this group and those actively engaged in designing and creating the information products which Liz Orna has described as having a central role in conveying knowledge between people.

Liz Orna on the chain of communication

I then handed the meeting over to Ruth.

Ruth Miller on plain language

Ruth explained that she did not train to be a plain language communicator; she fell into it and found it a perfect match for her personality. Like many people who work on improving communication, she notices things that are odd or confusing in everyday life, and wonders how they could be organised better. She would describe herself as a Simplifier: someone who looks at information and thinks about how to make it easier for people to understand.

More recently, Ruth has had the experience of teaching English to unaccompanied minors, as a volunteer at a refugee camp in Greece.

Plain language is not new. ‘Let thy speech be short, comprehending much in few words,’ it says in Ecclesiasticus (Sirach) (32:8), which dates from about 200 BCE. From Ptolemaic Egypt, we have a letter from a Minister of Finance to a senior civil servant, saying ‘Apollonius to Zeno, greetings. You did right to send the chickpeas to Memphis. Farewell!’ These quotes are from a 1988 pamphlet called ‘Making it Plain: a plea for plain English in the Civil Service’, with a foreword by Margaret Thatcher.

Thatcher promoted plain language writing. Early in her first government she engaged Derek Rayner, former CEO of Marks and Spencer, to commission a series of reports on efficiency in government, the ‘Rayner Reviews’. One of these, Forms into Shape (1981), analysed the use of forms in the Department of Health and Social Security (DHSS), and recommended the setting up of specialist Forms Units in government departments. Ruth would have more to say about forms design later, from her experience inside one of those units.

Ruth showed an illustration from the horticulture manual Flora, Ceres and Pomona by John Rea, beside an excerpt in which Rea says that he ‘has not inserted any of those notorious lies I have frequently found in books of this subject, but in plain English terms, set down the truth in every particular’. This is the earliest use Ruth has found of the phrase ‘plain English’ – it dates from 1665.

When plain language explanation should be unnecessary!

In many circumstances you shouldn’t need an explanation. Ruth showed a photo of a bathroom tap with a square knob set some centimetres to the right of it, from a British hotel. She couldn’t figure out how to make water come out of it. Evidently she wasn’t alone in this: the hotel had added a sign saying ‘Tilt Taps to Operate’ – which only made matters more confusing (the tap does not tilt, and there is only one of it). ‘Turn knob to operate tap’ would have been better – but even then, it’s an example of information as a prosthesis; had the artefact been better designed in the first place, it would not be necessary to help it with an information crutch.

Ruth also showed a photo of a fine mahogany boardroom table she had encountered at a business meeting. It’s useful to have a table on which to place your bag, so you can unpack the things you need for the meeting. On this table was placed a sign, ‘Please Do Not Put Briefcases on Tables as It Damages the Surface’. Ignoring points of dubious grammar, and the strange capitalisation… isn’t it just daft to provide a table you can’t use as a table?

‘If you go away from this meeting with only one thought,’ said Ruth, ‘it should be: think about the whole situation and challenge the need to explain, however clearly, something that is nonsense in the first place.’

Siegel and Gale experience

After working in government service, Ruth moved to the communication consultancy Siegel and Gale. This was an exciting time when computer technology and laser printers were changing how personalised documents such as utility bills and bank statements could be delivered. Now less ‘computerish’ fonts could be used; layouts could be more sophisticated; type size and boldness could be used for emphasis.

Siegel and Gale caused a stir in the 1990s with their redesign of the British Telecom phone bill. This put summary information on the front page, and more detail on follow-on pages; it used simplified language, and logical grouping of items. As a result the number of customer billing enquiries fell by 25%. BT also found that customers paid bills more promptly.

Siegel and Gale once won the Design Effectiveness Awards with a humble Royal Mail redirection form. Before the redesign, that form had an 87% error rate when customers filled it in, costing Royal Mail about £10,000 a week. The redesigned form paid for itself in just 17 days!

Siegel and Gale also moved into the redesign of bank statements. For Barclays, they changed the technical language of ‘Debits’ and ‘Credits’ to ‘Money out’ and ‘Money in’. In other words, name things the way people think about them, in the language they are used to.

Conrad had mentioned ethnographic research in passing; Ruth prefers to speak simply of watching people use things. She once worked on a booklet for TalkTalk, to help people set up an Internet router at home. The team then embarked on research to see how effective the booklet design had been. What had really helped was the inclusion of photos: this is what’s in the box, this is what it will look like when you have set it up, and so on.

This project did have its moments of comedy. There was a particular router which doubled as a picture frame: you could slip a photo into a slot on the front of it to ‘domesticate’ the thing. Ruth overheard someone telling a friend that she had just about set her router up, and had managed pretty well – but she wasn’t quite finished; now she had to find a photo! (Perhaps they should have added the word ‘optional’?)

Plain language: campaigns for awareness

The case for plain language use was championed within the public sector in Britain, Australia and Canada. In the USA, the lead was taken more by private business. In the US financial sector, they wanted people to understand things like investing. The US Securities and Exchange Commission pressed for consumer agreements to be written in language that people signing up to them would understand.

In the UK, the Plain English Campaign deserves credit for raising awareness and getting the bandwagon rolling. They were, and still are, a force for good. They were also very clever at marketing. Doesn’t ‘Plain English Campaign’ sound like a publicly-funded body, or an NGO? In fact, they are a commercial business.

The ‘Crystal Mark’, which the PEC invented, was a brilliant idea and a money-spinner too. Many companies believed that getting a Crystal Mark on one of their documents was a mark of quality, like a kite mark. If you saw a Crystal Mark, the implication was, no-one should have a problem understanding it. But that isn’t necessarily true, partly because PEC is financially motivated to award Crystal Marks, but also because their focus is far too narrowly set on language construction. An over-long and complicated set of Terms and Conditions, set in small and hard-to-read type, would still get a Crystal Mark from the PEC – if they deemed the language to be ‘plain’.

Recent experience

More recently, Ruth has worked freelance, and she showed some small examples of projects which have brought her pleasure. She has enjoyed working with Standard Life, simplifying their policy documents, and materials about investments and pensions. What got them walking along the road to simplification was a letter from a customer who complained:

My degree is only in mechanical engineering. I can understand differential calculus, I can design all the machinery for a sewage treatment works, I can design you a bridge but I cannot understand what my policy is worth.

In the redesigns, they introduced summaries and contextual notes, and made use of two-colour print. She added that these may be humble documents, but when you do them well, the improvement actually gets noticed – and besides, it improves the quality of people’s lives.

Form and function: lessons from the DHSS experience

Ruth has long enjoyed doing battle with forms. When she was a civil servant, the language used in forms dated from the 1950s, and they were very difficult to fill in; no wonder that the launch of the Plain English Campaign was marked by the shredding of forms in Trafalgar Square!

Ruth once worked in a unit in a government department (the DHSS) whose team had a brief to radically improve such forms. The team included writers and designers, and had a decent budget for research and testing too. They had input from Pat Wright, the applied psychologist Conrad had mentioned, and from the Adult Literacy and Basic Skills Unit; the RNIB provided input about impaired vision. They investigated what trips people up when they try to fill out a form – type size, vocabulary, question sequence and so on.

The unit was supposed to redesign 150 forms, and in the first two years they managed about eight! However, that seemingly slow progress was because the research, testing and analysis were heavily ‘front-loaded’; the investment paid dividends later.

With forms, there is sometimes a trade-off between length and complexity. Some forms in her collection are booklets of 28 or even 36 pages! People appear to prefer a long but easy-to-understand form. Reorganising questions so that all you have to do is tick a box is helpful – but it takes space. Clear signposting to take you to the next relevant part of the form is good – and that also takes space!

Many forms have an introductory paragraph which tells people how to fill in the form (write clearly, write in block capitals, use a black pen…). However, research shows that hardly anyone reads that bit. In any case, people’s behaviour is not changed by such prompts, so why bother?

If you want to provide guidance as to how to fill out specific parts of a form, provide it at the ‘point of use’ – embed your explanations, and any necessary definitions, right in the questions themselves. An example might be the question: ‘Do you have a partner?’ Then you can clarify with something like ‘By partner we mean someone you are married to, or live with as if you were married to them’.

It’s useful to establish what graphic designers call the grid – a set of rules about how space is to be used on the page to lay out the form. For example, the questions and explanations might be placed in a leftmost column, while the space for answers might span the next two columns. Ruth showed some examples of gridless and chaotic forms, later redesigned according to a grid.

Once upon a time, forms would be made up only of type, plus solid or dotted lines (for example, in letterpress printing of the early 20th century). That has created a set of norms which we don’t have to feel bound to these days. Today, lithographic printing permits the use of tints (printing a light shade of a colour by using a pattern of dots that are too small to be individually distinguished). Tints can help to distinguish which parts of the form are for writing into (with a white plain background) from those parts which ask the questions and provide help (where type is set on a tinted background). A second print colour, if affordable, can also be helpful.

Testing also found that it was very helpful to re-jig questions so they could be answered with tick-boxes. Boxes which are used to determine a ‘yes/no’ condition should follow a ‘yes’ kind of question, as in ‘Tick this box if you are married’.

Some such yes/no questions, if answered in the affirmative, will lead to others. Perhaps controversially, Ruth’s team in the DHSS reversed the usual order so that the ‘No’ tick box came before the ‘Yes’ one: this helped them to lay out the subsidiary questions more clearly. (In an online form, of course, such subsidiary questions can be made to disappear automagically if the answer is ‘No’.)

Ruth mentioned ‘hairy boxes’ – those pesky ones with vertical separators that are intended to guide you to place one letter in each demarcated space. They’ve proved to be a complete disaster. Someone mentioned the US Immigration form for filling out before the plane lands, which has this feature.

That’s not the only problem with that US Immigration form, remarked Ruth. It’s very bad at conveying the relationship between question and response space: people often assume that the space for the answer is the one below the question. Only when they come to the last question do they find that the questions are set below the spaces for answering them.

Signposting is important in complex forms, helping people to skip questions that don’t apply to them (‘If you answered “No” here, go forward to Section C’).

For the benefits claim forms, the DHSS team realised that many claimants don’t have the fine motor skills to write small, so they made more space for the answers – and left a substantially larger space for the signature.

Many forms end at that point, but the DHSS team added a section to tell the form-filler what to do now, what supporting documents to attach, and what would happen subsequently. It helped manage expectations and gave people a sense of the timescale according to which DHSS would respond.

Quick exercise

Ruth got us to work in pairs on an exercise based on the competition which the Plain English Campaign used to set in the pages of the Daily Express. She had multiple copies of three or four real-life examples of gobbledygook and invited us to simplify the messages; we wrote our alternatives on the small A4 white-boards which she uses in teaching, called ‘show-me’ boards in the trade, so we could hold them up to compare alternatives across the room.

One of the original offerings read: ‘We would advise that attached herewith is the entry form, which has been duly completed, and would further advise that we should be grateful if you would give consideration to the various different documents to which we have made reference.’

One suggested rewording was ‘Please note the documents referred to in this form’; another was ‘Here is the entry form; please note the referenced documents.’ PEC’s original winner was ‘Attached is the completed entry form. Please consider the documents referred to’ – though she personally preferred that ‘Here is’ version. We went through another couple of examples too.

Problem areas which people noted included:

  • use of the subjunctive mood in verbs
  • use of the passive voice in verbs
  • long sentences with multiple clauses

In the wording of contracts, it may be unclear who is meant by ‘we’ and ‘you’ in something the customer is supposed to sign. Jane Teather said that since the company had commissioned the form, the company should be ‘we’ and the customer ‘you’.

Something else that occupied us for a few minutes was the changing norms around the use of ‘shall’ versus ‘will’.

Four Cs

Ruth offered four Cs as ideals – Clear, Consistent, Concise and Compelling.

The ‘consistency’ ideal suggests that if you set up a convention in the communication – such as who is ‘we’ and who is ‘you’ in a text, or what something is to be called – you should stick to it. This defies the literary notion of ‘elegant variation’, whereby you ransack the thesaurus for synonyms rather than re-use the original term; that may make for a fine essay, but for these purposes, bin it. Once you have called a spade a spade, stick to it.

In written communications with a broad public, subsidiary clauses and relative clauses are probably confusing and best broken out into separate sentences, said Ruth. Likewise she pronounced a fatwa against parentheses: anything in brackets or between en dashes. These are not bad English by any means, but you risk confusing the wider audience. In any case, material in parenthesis is at risk of being thought of as of lesser importance (though you might move ‘bracketed bits’ to the end, she said, which is what I am doing now).

A question was raised, in response to a redesign Ruth showed which transformed a bullet list into a tabular layout, about the accessibility implications of using tabular data online for blind computer users. My own feeling, confirmed after discussion with others, is that an HTML table will ‘linearise’ nicely when reduced to readable text, e.g. for voice-synthesis presentation: first the header row will be read, then the first body row, then the next, and so on. However, this isn’t good enough. A table is an inherently visual device which allows the reader to pay selective attention to rows and columns. Really, the information should be completely re-organised to make an audio presentation meaningful to a vision-impaired person. (Think about how you would present the information on radio!)

Ruth’s overall approach to making textual information more accessible includes these tips:

  • Look to patterns in the text which can be exploited, for example by reorganising material into bullet lists. If Ruth sees a series of clauses linked by ‘and’, she considers bullet points as an alternative.
  • If a list of bullet points gets excessively long, analyse to see if it can be broken into two shorter lists.
  • Break up large slabs of text; Ruth avoids paragraphs which are more than three or four lines long.

Four mantras

Here are four other thoughts which Ruth offered in the course of the afternoon:

  • ‘Nonsense in plain language is still nonsense!’ – as someone in Standard Life had remarked.
  • Robert Eagleson, an Australian practitioner in plain English: ‘It’s the writer’s responsibility to be clear, not the reader’s responsibility to understand.’
  • ‘Clear writing stems from clear thinking.’
  • ‘Simplicity isn’t simple to do.’ Communicating well is an art, a craft, a skill, and it is not that simple to do well. Because writing is something everybody does daily, it’s tempting to think that everyone can do it well. Testing reveals this is not true! There is scope here for learning and for training.

Reactions

We would be interested to hear people’s reactions to this topic. Meanwhile here are some thoughts posted by NetIKX committee member Claire Parry:

  • Given the constraints of a half-day seminar, we inevitably only scratched the surface of this vast topic. Several participants commented afterwards that they would have liked to discuss design issues specific to online forms – maybe a topic for a future seminar?
  • I also wondered how we could take the discussion forward to apply information design principles to the Internet of Things, the need for documentation to be readable by both humans and machines, the ‘mobile-first’ philosophy and the move towards embedding user manuals in products.
  • As these are all areas where there is a clear need for interdisciplinary collaboration, it was encouraging to see participants from both information management and technical communications backgrounds contributing to the seminar and acknowledging our common aims. In an era of ‘alternative facts’ and ‘fake news’, clear and accurate communication is more important than ever.

Evidence-based Decision Making

Conrad Taylor writes:

On Thursday 3rd of November 2016, about thirty people gathered at the British Dental Association to discuss the topic of ‘Evidence-Based Decision Making’, in a workshop-like session led by Steve Dale, who practises as an independent consultant under the name ‘Collabor8Now’.

The NetIKX difference

Before I give readers an account of the meeting, and some thinking about it, I’ll describe a few things that often make NetIKX meetings ‘different’ from those in other organisations devoted to information and knowledge management. This meeting was a good expression of those differences.

For one thing, NetIKX is not dominated by academics – most who come to the meetings work with knowledge and information in government departments, business corporations, agencies and the like. That majority is then seasoned with a sprinkling of consultants who work in those kinds of business environment.

Secondly, the pattern of most NetIKX meetings is to have one or two thought-provoking presentations, followed by discussions or exercises in ‘table groups’ (called syndicate sessions). These syndicate sessions typically occupy a third of the time, and are followed by a pooling of ideas in a brief plenary. That’s quite different from the pattern of lecture plus brief Q&A encountered at so many other organisations’ meetings.

When you combine those two features – the nature of the audience and the participatory table-group engagement – the Network for Information and Knowledge Exchange does live up to its ‘network’ title pretty well. The way Steve organised this meeting, with a heavier than usual slant towards table-group activity, made the most of this opportunity for encounter and exchange.

Setting the scene

Steve explained that he had already delivered this ‘package’ in other contexts, including for the Knowledge and Innovation Network (KIN) associated with Warwick Business School (http://www.ki-network.org/jm/index.php). We know Steve is also interested in the idea of ‘gamifying’ processes: he hoped the work he had prepared for us would be fun. There would even be an element of competition between the five tables, with a prize at stake.

Steve started with a proposition: ‘Decisions should always be based on a combination of critical thinking and the best available evidence’. Also, he offered us a dictionary definition of Evidence, namely, ‘the available body of facts or information indicating whether a belief or proposition is true or valid’.

The first proposition, of course, begs the question about what you consider to be the best available evidence – whose opinions you trust, for example. That, it turned out, was the question at the heart of Steve’s second exercise for us.

As for that ‘definition’, I have my doubts. It could be interpreted as saying that we start with a ‘belief or proposition’, and then stack information around it to support that point of view. That may be how politics and tabloid journalism works, but I am more comfortable with scientific investigation.

There are at least two ways in which science looks at evidence. If an explanatory hypothesis is being tested, the experiment is framed in such a way that evidence from it may overthrow the hypothesis, forcing us to modify it. And very often, before there is yet a basis for confidently putting forth a hypothesis, ‘evidence’ in the form of observed facts or measurements, and even apparent correlations, is worth taking note of anyway: this then constitutes something that requires explaining. Two cases in point would be field notebooks in biology and series measurements in meteorology.

Similarly, ‘evidence’ in a forensic investigation should float free of argument, and may support any number of causal constructions (unless you are trying to fit somebody up). That’s what makes detective fiction fun to read.

Certainly, in our complex world, we do need the best possible evidence, but it is often far from easy to determine just what that is, let alone how to interpret it. I shall end this piece with a few personal thoughts about that.

Correlation and causation

Steve’s following slides explored what happens when you confuse ‘correlation’ (things, especially trends, which happen within the same context of time and space) with ‘causation’. For example: just as Internet Explorer was losing market share, there was a parallel decline in the murder rate; from the start of the Industrial Revolution, average global temperatures have been trending upwards, closely correlated with a decline in piracy on the high seas. Did the former cause the latter, or the other way round?

Those, of course, are deliberately silly examples. But often, correlation may usefully give us a hint about where to look for a causal mechanism. The global warming trend has been found (after much data collection at an observatory in Hawaii) to correlate with an increase in the proportion of carbon dioxide in the atmosphere. That observation spurred research into the ‘greenhouse gas’ effect, helping us to understand the dynamics of climate change. As for the field of medicine, where causative proof is hard to nail down, sometimes correlation alone is deemed convincing enough to guide action: thus NICE recommends donepezil as a palliative treatment for Alzheimer’s, though its precise mechanism of action is unproven.
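
As an aside of my own: it is easy to demonstrate how a shared trend can masquerade as a relationship. In this Python sketch the two series and all the figures are invented; each drifts upward over time for unrelated reasons, yet the correlation comes out very high:

    import numpy as np

    rng = np.random.default_rng(seed=42)
    years = np.arange(2000, 2020)

    # Two invented series which both happen to trend upward over time.
    temperature = 14.0 + 0.03 * (years - 2000) + rng.normal(0, 0.05, years.size)
    broadband_users = 1.0 + 0.45 * (years - 2000) + rng.normal(0, 0.5, years.size)

    # The Pearson correlation is high simply because both share a trend.
    r = np.corrcoef(temperature, broadband_users)[0, 1]
    print(round(r, 2))  # typically well above 0.9, yet neither causes the other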

Data visualisation

Steve then moved the focus on to one particular way in which information claiming to be ‘evidence’ is shoved at us these days – data visualisation, which we may define as the use of graphical images (charts, graphs, data maps) to present data to an audience. He mentioned a project called Seeing Data, a collaboration between British and Norwegian universities, which is exploring the role of data visualisations in society (see http://seeingdata.org). According to this project, the key skills we need to work with data visualisations are…

  • language skills;
  • mathematical and statistical skills, including a familiarity with chart types and how to interpret them;
  • computer skills, for those cases where the visualisation is an interactive one;
  • and skills in critical thinking, such as those that may lead us to question the assumptions, or detect a ‘spin’ being put on the facts.

Steve showed a few visualisations that may require an effort to understand, including the London Underground ‘Tube map’ (more properly, a network diagram). Some people, said Steve, have problems using this to get from one place to another. Actually, a geographically accurate map of the Underground looks like a dense tangle of spaghetti at the centre with dangling strands at the periphery. Harry Beck’s famous diagram, much imitated by other transport networks, is simplified and distorted to focus attention on the ‘lines’ and the stations, especially those that serve as interconnectors. But it is certainly not intended as a guide to direction or distance: using it to plan a walking tour would be a big mistake.

One might therefore say that effective understanding of a diagram requires experience of that diagram type and its conventions: a sub-type of the second factor in the list above (mathematical and statistical skills). Charts, graphs, diagrams and data maps are highly formalised semiotic expressions. Partly because of that formalism, but also because many visualisations are designed to support fast expert analysis, we would be wrong to expect every visualisation to be understood by just anyone. Even the experienced technicians who recently did my echocardiogram defer to the consultant cardiologist when it comes to interpreting the visualised data.

Critical thinking in focus

For our first exercise, Steve wanted us to apply critical thinking to ten given situations, set out in a document, copies of which were shared with each table group. Five of these puzzlers were illustrated with a graphic. To prime us, he talked through a number of images. In one case, a chart indicating changing quantities over time, the vertical axis representing the quantities did not start at zero (a common phenomenon): it gave the impression of a large change over time, which wasn’t warranted by the data. A map of referendum voting patterns across the counties and regions of Scotland could skew one’s impressions owing to the vast area of the sparsely populated Highlands, Galloway etc., compared to the small but densely settled zones of Glasgow and Lanarkshire. Other examples illustrated problems of sampling biases.
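
To make the first of those pitfalls concrete, here is a small sketch of my own (with invented figures) showing how the same data reads very differently depending on where the vertical axis starts:

    import matplotlib.pyplot as plt

    # Invented yearly figures: the underlying change is only about 2%.
    years = [2012, 2013, 2014, 2015]
    values = [980, 985, 992, 1000]

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

    ax1.bar(years, values)
    ax1.set_ylim(0, 1100)      # axis from zero: the change looks modest
    ax1.set_title('Axis from zero')

    ax2.bar(years, values)
    ax2.set_ylim(975, 1005)    # truncated axis: the same data looks dramatic
    ax2.set_title('Truncated axis')

    plt.tight_layout()
    plt.show()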

The exercises were quite fun. One of my favourites, and it did bamboozle me, showed a side elevation and plan picture of a twin-engined Douglas Dakota cargo plane marked with loads of red dots. The accompanying text said that the RAF had responded to losses of their planes to German anti-aircraft fire by examining the ones which got back, to see where the damage had occurred. They had aggregated the data (that is what the red dots indicated) and analysed the diagram to determine where to apply protective armour. What we were supposed to notice was that clearly, as those planes had managed to return, being struck in those marked places was usually survivable. The fact that no such dots showed up on the cockpit or either engine was because strikes in those locations tended to be fatal.

I won’t go through all of the examples in the exercise. In one case we were supposed to analyse trends in death by firearms year after year, but the y axis had been inverted, turning the curve upside down. In another case, the y axis was organised by a geometric progression rather than a linear one (each extra increment represented a doubling of quantity). That was quite a weird example, but bear in mind that logarithmic scales are common in some scientific graphs – and are used appropriately there, and understood by their intended audience.

It was fun working with the team on my table. We were pretty good at identifying, in some cases, multiple points of criticism. That rather undermined our score, because Steve decreed there should be only one criticism per example, and his answers had to be regarded as the right ones for the purpose of the competition! But the real benefit was the process, analysis and discussion.

Whose evidence do you trust?

The second exercise painted the scenario of an Italian company developing software for the retail sector: the concern was to know whether introducing performance-related pay would improve productivity in the engineering teams.

Steve had concocted eight forms of ‘evidence’ from various sources: a senior external consultant who said ‘no, you need to develop the leadership skills of supervisors’; a trusted friend who pointed to a study from the London School of Economics; an article in the Financial Times; a Harvard study of productivity amongst Chinese mineworkers; various responses to the question posted on a Human Resources discussion forum; what the HR director thinks. There were also two bits of evidence closer to the company: data about discrepancies in performance between the existing teams, which seemed to indicate that the most productive teams were those with a high proportion of senior engineers; and information that the union representing most of the engineers had resisted previous attempts at performance-related pay differentials.

We were supposed to rank these inputs on the basis of how trustworthy we thought their sources to be; my table found it quite hard to avoid considering how relevant the offered evidence might be as well. For example, we didn’t think the circumstances of Italian software engineers and Chinese mineworkers remotely comparable. I found it interesting how many of us tended to regard people like consultants and top management as trustworthy, whereas, when the employees’ union was mentioned, people said, ‘Oh, they’ll be biased’. There is obviously a lot of subjectivity involved in evaluating sources.

If one thinks more broadly of evaluating the relevance and validity of evidence on offer, it appears to have at least two components: the degree to which the experience or model offered has both internal coherence and similarity to the situation for which a decision is being sought; and evaluation of the ‘messenger’ bringing those ideas. Thus there is a danger that useful evidence might be disregarded because of bias against the source.

Personal reflections

This was certainly a lively and highly engaged meeting, and Steve must be congratulated for how he structured the ‘table work’. The tasks we were set may have been artificial, and I thought some of the conclusions reached could be challenged, but it made for a lot of discussion, which indeed continued when we broke into unstructured networking afterwards, with drinks and snacks.

Clearly, it is valuable to learn to be critical of data visualisations, especially now they have become so fashionable. Data visualisations are often poor because their creators have not thought properly about what is to be communicated, and to what kind of audience, or haven’t considered how these highly abstracted and formal representations may be misunderstood. (And then, of course, there’s the possibility that the purpose is deliberately to mislead!)

There is a whole different (and more political) tack that we could have explored. This was the last NetIKX meeting of 2016, a year in which we have witnessed some quite outrageous distortions of the truth around the so-called ‘Brexit’ referendum, to name but one field of discourse. More generally, the media have been guilty of over-simplified representations of many very complex issues.

This was also the year in which Michael Gove exclaimed that we’d had enough of the opinions of experts – the kind of attitude that doesn’t bode well for the prospect of ‘evidence-based government’.

In respect of Evidence-Based Decision Making, I think that to rise to urgent environmental, social, developmental and political challenges, we definitely need the best evidence and predictive modelling that we can muster. And whatever respect we as citizens have for our own intelligence, it is hubris to think that we can make sense of many of these hyper-complex situations on our own without the help of experts. But can we trust them?

The nature of that expert knowledge, how we engage with the experts and they with us, and how we apply critical thinking in respect of expert opinion – these are worthy topics for any knowledge and information management network, and not something that can be dealt with in an afternoon.

At the meeting, Dion Lindsay spoke up to propose that NetIKX might usefully find a platform or method for ongoing and extended discussions between meetings (an email list such as the KIDMM community uses is one such option, but there may be better Web-based ones). The NetIKX committee is happy to look into this – so I guess we should start looking for evidence on which to base a decision!

Blog for September 2016 Seminar: Connecting Knowledge Communities: Approaches to Professional Development

Conrad Taylor writes:

The September 2016 meeting of NetIKX was introduced by David Penfold. He explained that at this time in 2015, the NetIKX meeting about ‘connecting communities’ had heard from various organisations in the knowledge and information management space. This year, the decision to focus the meeting on training and development had been partly influenced by a plea at an ISKO UK meeting for more thinking about these topics.

All our speakers had interpreted the meeting topic as being about Continuous Professional Development (CPD). There were two presentations, followed in the usual NetIKX pattern by discussion in table groups. The first presentation was given by Luke Stevens-Burt, who is Head of Business Development (Member Services) at the Chartered Institute of Library and Information Professionals.

CILIP’s Professional Knowledge and Skills Base

Luke explained that CILIP accredits degree programmes at universities, and registers and certifies members through chartership and fellowship, but that its main support for members’ development is delivered through CPD, which he defined as ‘intentionally developing the knowledge, skills and personal qualities needed to perform professional responsibilities’. In the past, CPD had been conceptualised as a formal training process, but there has been a shift towards understanding informal experiences and exposure to ideas as being as important, if not more so.

As a person’s work experience develops, and the world of work changes, CPD helps to bolster adaptability. The resources available for learning are diversifying, including MOOCs, journals, seminars and conferences and meetings, even informal conversations over coffee.

Central to CILIP’s support for CPD is something they call the ‘PKSB’, which stands for Professional Knowledge and Skills Base. The conceptual diagram for this – rather difficult to read because of unfortunate low-contrast colour choices – Luke called the ‘wagon wheel’. At the hub, the diagram places ‘Ethics and Values’. Radiating from this are eight spokes representing aspects of Professional Expertise, and four more spokes represent Generic Skills. Around these the diagram portrays a ‘rim’ representing the wider context of the library, information and knowledge sector, and finally an outer ‘tyre’ of an even wider context, to do with the employing organisation and its environment.

The eight headings for ‘Professional Expertise’ are: organising knowledge and information; knowledge and information management; exploiting knowledge and information; research skills; information governance and compliance; records management and archiving; collection management and development; and so-called ‘literacies’ and learning.

The more generic skillsets, which can be found in many professions, were identified as use of computers and communication, leadership and advocacy, strategic planning, and a sundry collection around customer service design and marketing.

A system for self-assessment

Fundamentally, it seems, the PKSB toolkit is a system with which a person can make a self-assessment of their level of understanding or skill in each of these areas (and in subsidiary sets under these, totalling about a hundred in all), ranking each between zero for ‘no understanding’ and four for the highest level of expertise.

In using the PKSB, CILIP members are supposed to define what level they are at in each skill area that’s relevant to their current job, and what level they would ideally like to attain, and add an explanatory comment. For example, a person might decide that they only score a basic ‘1’ at using classification schemes and taxonomies, but would like to make progress towards level ‘2’. An alternative use of PKSB could be for career planning, related not to your current job, but to one into which you would like to progress, which might require an upgrading of skills.
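
Purely by way of illustration – and this is my sketch, not CILIP’s data model – the kind of record the PKSB captures might look like this in Python:

    from dataclasses import dataclass

    @dataclass
    class SkillAssessment:
        # Illustrative fields only; CILIP's actual schema may differ.
        skill_area: str
        current_level: int   # 0 = no understanding ... 4 = expert
        target_level: int
        comment: str = ''

        def gap(self) -> int:
            return self.target_level - self.current_level

    entry = SkillAssessment(
        skill_area='Classification schemes and taxonomies',
        current_level=1,
        target_level=2,
        comment='Would like to apply taxonomy design in my current role.',
    )
    print(entry.gap())  # 1: one level of development to plan CPD around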

Apparently, this PKSB system is used by CILIP in deciding whether to register someone as a member, for example as a chartered member. In this case, the self-assessment is only one step, because the applicant must also submit a portfolio explaining how they have gained their skills, and this will go before an Assessor.

Thereafter, the PKSB is purely a self-assessment tool so that members can monitor their progress and design a CPD path for themselves. It is for use by CILIP members only, though it is also used as the framework for deciding whether university courses meet the standard at which they can be accredited by CILIP.

Until mid 2016, CILIP members used the PKSB by printing out a set of forms and maintaining them manually. The recent developments do not fundamentally change the system, but make it available as an online interactive system with an app-like interface, usable from a computer, tablet or smartphone. Much of the rest of Luke’s presentation consisted of a live demonstration of the online PKSB interface and facilities, for example showing how it can generate summary reports.

Finally, Luke touched briefly on the resources CILIP can directly provide to support professional development. Within CILIP there are a number of member networks. Some are regionally based, and some are special interest groups – such as the School Libraries Group, the Multimedia Information & Technology group, and the UK eInformation Group (UKeIG). CILIP also plans to launch a new SIG in January 2017 for knowledge and information management, as a revamp of the existing Information Services Group (it will be interesting to see whether this new body will be prepared to collaborate with others in the field, such as NetIKX, ISKO, etc).

CILIP also maintains a Virtual Learning Environment, with online modular courses, about which it would have been nice to hear more; and publishes a members’ magazine (Update), e-bulletins, and various journals, some of which are in print form and some online.

CPD in Government

The second presentation was given by Christopher Reeves and Karen Thwaites, who both work for the Department for Education – Christopher on the records management side, and Karen as a knowledge and information manager with a training role. Additionally, Christopher is on the working group for the Government Knowledge and Information Management (GKIM) Skills Framework, which was the topic of their talk.

The Knowledge and Information Management (KIM) profession has been recognised by the UK government only since the turn of the century, though there have been many jobs in the civil service with aspects of KIM within them, such as librarians and managers of information rights, and records managers. Karen displayed an ‘onion diagram’ showing a core set of KIM roles, surrounded by allied roles such as specialists in geospatial information or data scientists, and an outer rim of allied professions.

The Civil Service Reform Plan, published in June 2012, stated that civil servants should operate as ‘digital by default’, with a set of skills transferable between the public and private sector. People with KIM roles have become more prominent lately; in the field of records management, the Hillsborough Enquiry played a role in raising a more general level of awareness, as has the current independent enquiry into child sexual abuse.

The GKIM Framework working group

A working group for the GKIM Skills Framework was announced in 2015 by Stephen Mathern as Head of Profession, and gathered under the chairmanship of David Elder to review an existing Framework and propose revisions. The volunteer participants in the group, which included Christopher, represented a range of grades and a variety of KIM roles, across a broad spectrum of departments. The new Framework was launched at a conference in 2016.

The process started with a survey of KIM colleagues, via the departmental Heads of Profession. The results indicated that the existing Framework was seen as too rigid, not user-friendly, with complex language and jargon, and not accessible.

Putting together a plan of action, the working group resolved that the replacement Framework should be flexible, able to fit the profession as it evolves. However, the three main skill areas were retained as definitions: these are (a) abilities to use, evaluate and exploit knowledge and information; (b) abilities to acquire, manage and organise knowledge and information; and (c) information governance skills.

The group resolved to define a ‘foundation level’ for KIM skills, appropriate for juniors, and those outside the profession itself who nevertheless need better information and knowledge handling skills. Because people need to be able to benchmark their skills and performance, a self-assessment tool was recommended (showing parallels with what CILIP have done with their PKSB). Finally, the working group was asked to gather examples of good practice and competency within KIM roles.

Six core KIM-professional roles were identified as existing in all departments (and Christopher returned to the ‘onion diagram’ to display these): Information Managers, Records Managers, Information Rights Officers, Knowledge Managers, Information Architects and Librarians. The working group members divided up responsibility for gathering examples of good practice for each of these roles, at all levels.

The draft Framework proposals were then circulated to the departmental Heads of Profession and widely consulted on in other ways, and the working group asked for opinions about whether they had managed to meet the needs expressed by the previous survey. The feedback was overwhelmingly positive, but did lead to some minor amendments being made.

GKIM launch and implementation

The new GKIM Framework was officially launched at the 2016 GKIM Conference. The launch was actively promoted to the departmental Heads of Profession, and Civil Service Learning weighed in by enabling a Web presence for the GKIM materials. (It later emerged in discussion that the Framework documentation consists of one over-arching document, and there are add-ons with more detail about each of the core KIM professions identified.)

Karen closed the presentation with a brief look at the Department for Education as a case study. Within the DfE, senior KIM professionals now have a good awareness of the Framework and its supporting documentation, and are committed to rolling it out to departmental colleagues.

The profile of KIM will be promoted through a ‘KIM Learning Month’ (March 2017), and a stand at the DfE departmental ‘fair’ event in October 2016. The KIM strategy will also be linked to the DfE’s performance management objectives, and the Permanent Secretary’s Transformation Programme, within which knowledge management has a critical role to play.

Q&A about GKIM

There were questions asked about whether the slide-set would be available for NetIKX members to peruse later (yes, they will be posted in the Members’ Area), and also about how accessible the GKIM Framework documents were. The answer was that the Framework can be downloaded from http://www.cilip.org.uk/government-information-group/working-government/gkim-skills-framework. David Penfold reported that the July/August 2016 issue of CILIP Update includes an interview with David Elder about the GKIM Skills Framework.

Table group discussions

I confess that my memory of the table group discussions at this meeting is a bit vague. A flip chart was available during the tea break, on which people could write suggestions for discussion questions, and four were written up, though I cannot remember them in detail, even though I contributed one! The arrangement whereby one question was assigned to each of four table groups was not to my liking: I thought several questions were worth talking about, and the division seemed artificial.

One of the table groups looked at how KIM skills should feature in everyone’s development, not just that of ‘information professionals’ – at least, at the level of promoting a core awareness of the issues. An example might be that everyone should have an awareness about information governance.

The point was made that the language around ‘knowledge’ and ‘skillsets’ is too limiting. You could pick up knowledge and maybe skills by attending some workshops and getting a CPD certificate; but organisations need employees to have appropriate behaviours and values around information and knowledge. I suppose examples could be things like habitually paying attention to information security, or sharing knowledge appropriately with other sections rather than hoarding it.

One of the tables (where I was) explored a number of topics, not limited to professional development but extending to general intellectual development in society at large. There had already been mention of basic skills around information and knowledge, and we considered extended definitions of ‘literacy’, such as ‘information literacy’, and in particular the ability to evaluate information sources for accuracy, relevance and trustworthiness. 2016 seems to have brought some very low points for poor-quality and misleading information, in politics and the media particularly. I personally would like to see more critical thinking taught, even in childhood.

There was discussion about how some people need only perhaps a basic ‘awareness’ of KIM issues, plus maybe knowledge about whom to approach for further help. Christopher said that at the foundation level of the GKIM Framework, they do talk about ‘Information Awareness’.

With the government moving in the direction of putting pressure on businesses to take on apprentices, it may be apposite to think about what a KIM apprenticeship might look like, perhaps along the lines of the ‘management apprenticeship’ scheme being developed by the Chartered Management Institute.

[Apologies to Conrad and to all readers for the delay in uploading this report.]

Blog for July 2016 Seminar: Understanding Networks

Conrad Taylor writes:

The 80th meeting of the Network for Information and Knowledge Exchange (NetIKX) took place on 14 July 2016 on the topic ‘Understanding Networks’ and was addressed by Drew Mackie and David Wilcox, who also took us through some short exercises. The meeting was chaired by Steve Dale, who has worked with Drew and David on a number of projects.

Drew has done research around network analysis. David’s background is as a journalist (Evening Standard), and he has tried to give people a voice in regeneration and urban development issues. They typically exercise their combined skills in projects for community development and the strengthening of social services.

In my account of the meeting, I do not exactly follow the order in which the points were made. I also offer my own observations. Where those deviate significantly from the narrative, I’ll signal that in indented italics, as here.

The idea of networks
The concept of a network has many possible applications, such as computer networks, but Drew said our focus would be networks in general and how they can be represented visually and thus analysed. Visualisations appeal greatly to Drew, who is by background an architect and illustrator.

There are various ways in which the nature of an organisation or a community can be expressed, e.g. through stories. Network thinking is a more structural approach. In network representation, one typically has some form of blob which represents an entity (such as a person, department, organisation), and lines are drawn between blobs to show that a relationship exists between the entities on either end.

Mindmap diagrams and ‘organograms’ are forms of network diagram representing hierarchical set-ups, designed to limit the number and kind of connections possible. Other networks are more freeform.

Hierarchical organisation is just one way in which networks can be constrained. Other examples of constrained networks: connections between components in an electronic circuit are anything but random. You cannot travel on the Underground between King’s Cross and Seven Sisters without passing through Finsbury Park. Connections may also be strongly typed: the connectors in a genealogy diagram may indicate ‘was married to’ or ‘was the child of’, and some connections are not possible – you can’t be the mother of your uncle, for example.

‘Anything that can be drawn as a set of nodes and connections is a network,’ said Drew. The nodes could be people – they could be ideas. For the purposes of this workshop, we considered networks where the nodes are people, organisations and institutions: while not being accidental or random, such networks are not particularly constrained.

People who work with networks
Drew identified four kinds of people who may work with networks. These roles are not mutually exclusive and can overlap.

Network Thinkers understand the power of thinking in terms of networks and promote that view, usually applying it to their particular field, such as management or urban design. In economics, he mentioned author Paul Ormerod, who is a visiting professor at the UCL Centre for Decision Making Uncertainty.

Network Thinkers recognise that networks may have been designed for a purpose (‘intentional networks’), or may emerge from a variety of connections and purposes (‘unintentional networks’); the latter have patterns which evolve and change over time.

Network Analysts are probably those most likely to work with formally diagrammed representations of networks. They survey networks to figure out which nodes are more central, which are more on the periphery. For example, someone may not themselves have many links, but they may link key clusters within the overall network and thus play a central role.

For a simple network with up to about 20 nodes it isn’t too difficult to spot these characteristics in a network diagram, but when the diagrams get more complex it is a good idea to use software which not only draws a representation, but can also perform mathematical analyses (as described more below).

Network Builders help networks grow by creating and strengthening connections between other people, not just their own. Often these connections are between people (or organisations) already connected to the ‘builder’, who might also be described as a broker or bridge-builder. In the kind of community building work that David and Drew do, these people are out there in the community and serve a valuable function.

Networkers, Drew defined as people who are trying to build their own network. They may call their contacts ‘a network’, but more properly it is a list of their direct contacts.

Uses of network theory and analysis
Drew mentioned a number of applications for network thinking.

Organisations, partnerships. A prominent use is in management of organisations, e.g. creating networks to optimise the flows of knowledge and information. A more expanded but similar use is to facilitate partnership working between organisations, communities and individuals: this is a major focus of the work which Drew and David do, and Drew promised to give us examples.

Life transitions. Within a project for the Centre for Ageing Better, they are deploying network analysis with a time dimension, showing how a person’s networks of support and friendship and engagement can change as they age. In the example he showed us later, a fictitious aggregated persona had her network connections changed as her husband retired, then died; she compensated for this by joining activity groups in the community, but later her ill health prevented her from attending them. Also changing over time was her relationship to agencies and individuals in the health service.

Space design. As an architect, Drew notes that network theory can be used in urban design, to identify those places that are most central to the structure of and life in a city. Epidemiology uses network theory to understand how infectious diseases spread, and behaviours which have positive or negative health consequences (from jogging to alcoholism).

Military doctrine is defined by NATO as ‘fundamental principles by which military forces guide their actions in support of objectives… authoritative but [requiring] judgment in application.’ Drew said that the US military now talks about ‘fighting networks with networks’. At the US Military Academy at West Point, New York, there is now a Network Science Center, a multidisciplinary research project for representing and understanding physical, biological and social phenomena through network-analytical approaches (see http://www.usma.edu/nsc/SitePages/About.aspx). Security services, police forces and of course intelligence services also use network analysis.

Some network concepts
Link maxima. There is quite a bit of maths in network theory, but some levels are easy to understand. Consider, for example, the relationship between the number of nodes, and the number of connections possible between them.

My explanation: suppose I have two friends: in network terms we are three nodes (ignoring our other friends for the sake of argument!). I’m friends with Jim, also Anna, but they don’t yet know each other. So Jim has one connection within the network, Anna has one, and I have two. I introduce Jim to Anna; now each of us has two connections, and the maximum number of possible connections between three nodes (three connections) has been reached.

Try drawing a series of simple circle-and-line diagrams, and count the connections possible. With four nodes, each can have up to three connections and the maximum number of connections is six. In a network of six nodes, each can form up to five connections; the total possible number of connections is 15.

There is a general formula: where n is the number of nodes, the maximum number of connections is n × (n − 1) ÷ 2. With ten nodes, the maximum number of connections is 45. Double the size of the network to 20 nodes, and now 190 links are possible.
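
For those who like to check the arithmetic, the formula is a couple of lines of Python:

    def max_links(n: int) -> int:
        """Maximum number of undirected links between n nodes: n * (n - 1) / 2."""
        return n * (n - 1) // 2

    print(max_links(3), max_links(6), max_links(10), max_links(20))  # 3 15 45 190
    print(max_links(26))  # 325 -- the 'A to Z' network discussed below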

Of course, in a real live network, not everyone is connected directly to everyone else; in any case we just wouldn’t have the cognitive ability to maintain so many links. In networks which Drew and David have mapped, the most links any one node directly makes is about 15. But everyone is connected to everyone indirectly through intermediate nodes in the network.

Centrality. Network theory identifies several forms of ‘centrality’, which broadly stated is a measure of which are the most important nodes in a network system. Today, said Drew, we would look at closeness centrality and betweenness centrality.

As I understand it, the most basic kind of centrality measure is ‘degree centrality’, which simply means the number of links each node has. A person with links to 2 others in the network has less degree centrality than someone with 10. But this can be complicated if the links have some directionality: on Facebook or Twitter, someone may have two million incoming links (‘likes’ or ‘follows’) and so is popular, but have few outgoing links and so not be particularly gregarious.

More complex centrality indices use the idea of the ‘length’ of a path between nodes. This is potentially confusing, because the spread of nodes on a network diagram is bound to mean that some connecting lines appear longer than others, but this is not what is meant. Length here is measured as the number of hops it takes to get from one node to another. If A is linked directly to D, and D directly to M, and M directly to Y, and that is the only way to get from A to Y, then the length of the path from A to Y is three hops.
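
In the NetworkX library mentioned later in this piece, that hop-count is called the shortest path length. A minimal sketch of the A-to-Y example (my own, containing only the chain of links described above):

    import networkx as nx

    # The chain from the text: the only route from A to Y runs A-D-M-Y.
    G = nx.Graph([('A', 'D'), ('D', 'M'), ('M', 'Y')])
    print(nx.shortest_path_length(G, 'A', 'Y'))  # 3 hops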

A simple definition of closeness centrality is centrality to the network as a whole. Nodes which have a high closeness score are best placed to spread information across a network, and they also have a good overview of what is happening across the network.

Suppose you have 26 nodes in a network labelled A to Z, and you want to calculate the closeness centrality of node M: add up the number of hops it takes to get from M to A, from M to B, from M to C and so on. The sum of all those lengths for M, divided by the total number of nodes, has been called its index of ‘farness’, and its index of ‘closeness’ is simply the inverse of this.
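
Here is a sketch of that calculation on a small invented network, using NetworkX. Note that NetworkX’s built-in closeness_centrality scales slightly differently – (n − 1) divided by the sum of hop-counts – but the underlying idea is the same:

    import networkx as nx

    # A small invented network for illustration.
    G = nx.Graph([('A', 'D'), ('D', 'M'), ('M', 'Y'), ('M', 'B'), ('B', 'A')])

    # Farness of M as described above: total hops from M to every other node.
    farness = sum(nx.shortest_path_length(G, 'M').values())
    print(farness)                          # 5
    print(nx.closeness_centrality(G)['M'])  # (5 - 1) / 5 = 0.8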

Betweenness centrality notes that some individual nodes are central to different bits of the network: this is common in networks made up of people. We can identify clusters of nodes that hang together, versus clusters weakly linked to the others. You might do this to identify who to lobby, to whom to feed information, to have the most effect on the network. Nodes with high betweenness centrality act as important bridges within the network, but may also be potential single points of failure.

Betweenness centrality is more difficult to compute. Repeating the above example, we would ask how often node M acts as a ‘stepping stone’ on the shortest path between any two other nodes. This concept was introduced by Linton Freeman in 1977 to help identify, in human networks, who has the most influence or control over communication between other people.
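
NetworkX can compute this directly; a minimal sketch on the same invented five-node network:

    import networkx as nx

    G = nx.Graph([('A', 'D'), ('D', 'M'), ('M', 'Y'), ('M', 'B'), ('B', 'A')])

    # How often each node sits on the shortest path between two others.
    bc = nx.betweenness_centrality(G, normalized=False)
    print(sorted(bc.items(), key=lambda kv: -kv[1]))  # M scores highest here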

Eigenvector centrality measures how well connected a node is to other well-connected nodes, and such nodes generally play a leadership role within the network.

As I understand it, this measure is recursive: connecting to a node which itself has a high eigenvector score (a well-connected and influential node) counts for more than connecting to one with a low score. You raise your own eigenvector centrality score by connecting to as many well-connected people as you can.
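
The same library offers this measure too; in this sketch (same invented network as before) the best-connected node duly comes out on top:

    import networkx as nx

    G = nx.Graph([('A', 'D'), ('D', 'M'), ('M', 'Y'), ('M', 'B'), ('B', 'A')])

    # A node scores highly when its neighbours themselves score highly.
    ec = nx.eigenvector_centrality(G)
    print(max(ec, key=ec.get))  # M, the best-connected node here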

Network density. There are various definitions of this metric. Drew thinks the most useful one is, the average number of connections per node within the network. This works for any network size.

Our imagined ‘A to Z’ network has a theoretical maximum of 325 connections, and if they were all active, each node would have 25 links and we could call that situation ‘100% density’. But polling the network, we may find that A actually has 5 connections, B has 7, C has 3, D has 11 and so on.
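
Both ways of expressing density are easy to compute; a sketch on an invented random network of 26 nodes:

    import networkx as nx

    G = nx.gnm_random_graph(26, 80, seed=1)  # 26 nodes, 80 of the 325 possible links

    # Drew's preferred metric: the average number of connections per node.
    avg_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()
    print(avg_degree)      # 2 * 80 / 26, roughly 6.2 links per node

    # NetworkX's density() is the related ratio of actual to possible links.
    print(nx.density(G))   # 80 / 325, roughly 0.25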

Clusters and communities. Network analysis software can identify clusters of nodes which tend to hang together. This is not because they share a common characteristic, but because of the place they occupy in the network. The software can then auto-colour those nodes in groups to help you to notice them. Usually these network clusters turn out to have a basis in the nature of real functional links within the community.

One method for detecting hierarchical sets of communities and sub-communities in large networks was developed at the Catholic University of Louvain in Belgium and is called the Louvain Method. It’s available as C++ or Matlab code and is used in social network analysis tools such as NetworkX and Gephi. (See a pretty thorough if dense explanation on Quora at https://www.quora.com/Is-there-a-simple-explanation-of-the-Louvain-Method-of-community-detection)
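
Recent versions of NetworkX (2.8 and later) include a Louvain implementation; a minimal sketch on a graph with three planted clusters:

    import networkx as nx

    # Three tight clusters of six nodes, loosely chained together.
    G = nx.connected_caveman_graph(3, 6)

    # Louvain community detection, as available in NetworkX 2.8+.
    communities = nx.community.louvain_communities(G, seed=42)
    print(communities)  # typically recovers the three planted clusters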

Another approach to cluster detection is able to notice clusters that overlap; that, it would seem, is what the Kumu web-based network analysis tool uses.

Types and uses of networks
David now took over the meeting. As a journalist he had noticed how in a community affected by some proposed urban development project, a ‘helicopter view’ might reveal disconnected initiatives across the community; how to join them up, how to overcome the silo mentalities which crystallise around different professions and cliques? Thus he became interested in network thinking.

David showed us a couple of diagrammatic slides originated by Harold Jarche. One, labelled ‘the network learning model’, creates a space between two axes. The vertical axis indicates ‘diversity’ and ranges from ‘structured and hierarchical’ at the bottom to ‘informal and networked’ at the top. The other axis has ‘goal-oriented and collaborative’ at the left and ‘opportunity-driven and cooperative’ at the right. Ranged up the diagram from bottom-left to top-right are three slightly overlapped balloons representing three levels of networks for sharing and learning:

  • Work Teams (structured, goal oriented): based inside a formal organisational structure, sharing complex knowledge, driven by deadlines, strong social ties, co-creating learning.
  • Communities of Practice (half-way along both axes): spanning shared concerns across organisations; a trusted space to test ideas among people who may not know each other personally, integrating work and learning.
  • Social Networks (informally co-operative, opportunity-driven): high diversity of ideas and opinions such that you might find stuff you hadn’t considered in your task group; weak social ties.

Visualisation and analysis software
I have already mentioned the use of specialised software to help represent networks and to analyse them. After the exercise and a break, Drew returned to this topic. Most network analysis software, he said, has an analytical and heavily mathematical flavour: examples are UciNet and Gephi. But recently, easier-to-use software has appeared and he described three that he and David have used.

  • yEd is a free diagramming package for Linux, Mac and Windows. I have used this myself, but for drawing a particular kind of non-social network diagram: the Entity-Relationship Diagrams (ERDs) used in database design. According to Drew, yEd also has some ability to analyse network maps.
  • Kumu is their current favourite and main recommendation. It is a web-based system, and you can sign up for a basic free account at kumu.io. You can draw network diagrams with Kumu, or it will make them from data and do the analysis; it can also hold stacks of attribute information attached to the nodes and the connections, which enables clever searches and filters on the diagram. Drew and David have been combining network analysis with Asset-Based Community Development methodology (of which, more later), and being able to annotate the nodes with what assets they bring to the table has been very useful.
  • Polinode. Drew described this as ‘a very slick program’, also web-based, and business oriented. It has a built-in survey mechanism which is useful for collecting information about people in your business network and automatically populating the network diagram accordingly.

Example network maps

Readers may want to look at the PDF file of the slides to see the examples described here. The slides can be found online by clicking here unless you are reading this off paper, in which case the URL is: https://docs.google.com/presentation/d/1peVwMMIMvlzsfz9AWgA5rtVm_JNMzCyl37HUxJTYr0o/edit?ts=5784d539#slide=id.g115d229400_2_10

The first example was created through a survey conducted for the Irish Crafts Council, polling designers and makers, suppliers, retailers and agencies in Ballyhoura, South Tipperary, Wexford, Kilkenny and West Cork. It presents as quite a dense diagram with over 400 nodes and an overlapping mesh of connection lines which in places all run into each other so it is hard to distinguish them. The software discovered three major clusters, based on the link patterns. Interestingly, the clusters were based strongly on geographical proximity – it wasn’t the case that jewellers would network with other like craftspeople across the region, for example, but across the crafts, people networked locally and helped each other out.

The study also revealed that economic development agencies had lots of connections; in West Cork in particular, the agency played a leading role in the network. Meanwhile, though the Wexford and Kilkenny cluster showed a very dense pattern of connections, they were mostly connections within cliques of craft workers, and as such were not very influential across the area.

A second example was for a regeneration partnership programme for Berwick upon Tweed; in this diagram, all 50 or so nodes were organisations. Seven, highlighted on the diagram by the software, were major ‘hubs’ with multiple linkages, with the Borough Council as the most central, playing a ‘brokering’ role between the more strategic organisations at the top of the diagram, and the tightly focused local organisations at the bottom.

Within this project, they then compared the network graph with the results of a survey in which each organisation within the network was asked to rate their perception of (a) the skills held by the other organisations, across five categories, and (b) the resources those organisations had to offer, across the same five categories. Dramatically, the Borough Council, which the network analysis had identified as ‘most central’, scored spectacularly the worst on both counts! This led to interesting discussions. Do you pump money, training and resources into the Council as the centre of that network, or bypass it with a new project? (What actually happened was that all Borough Councils in Northumberland were disbanded.)

Kumu again, in detail
Drew was keen to point out that whereas in the past different software tools would have been needed to work on the different phases of the Berwick upon Tweed project, they were able to do it all simultaneously in Kumu. If anyone is interested in pursuing this after the session, he suggested that we get a free Kumu account. That will give each of us a Kumu ‘name’, and he offered to put up a Kumu site where we could discuss this stuff and experiment.

There are various ways of getting network data into Kumu. You can draw directly to the screen; Drew likes drawing, so he appreciates this. Or you can type commands into a screen terminal. Comma-separated database files (.csv) can also be uploaded. Kumu can ingest spreadsheet files from Google Sheets, and these in turn can be fed from Google Forms, Google’s web-based form tool. Drew also noted that Kumu is planning to introduce its own integrated survey module soon.
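
As a rough illustration of what such a .csv upload might contain, here is a sketch in Python (the ‘From’/‘To’ column headings follow Kumu’s import convention as I understand it – check Kumu’s own import documentation – and the organisations named are invented):

    import csv

    # Invented example connections; each row would become one link in Kumu.
    links = [
        ('Greenfield Trust', 'Hillside GP Practice'),
        ('Community Hall', 'Greenfield Trust'),
        ('Hillside GP Practice', 'Community Hall'),
    ]

    with open('connections.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['From', 'To'])    # the two endpoints of each link
        writer.writerows(links)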

Kumu, as already explained, lets you enrich nodes with extra data, attributes and tags, which makes search and filtering more powerful. Drew also believes that the developers are very responsive and listen to how people want the product to develop.

More examples, fictitious and applied
Drew showed a network map for ‘Slipham’ – a fictitious community which they use for testing ideas and policies. It is populated by the kinds of local people, organisations and services which would be typical for most communities: there’s the General Hospital and a group GP practice, a branch of Age UK, a number of local councillors, a Somali Association, the Rotary Club, the Police, several sports clubs, etc… and a number of individuals who provide bridging functions through their multiple engagements.

The ‘centrality’ measures for the nodes are emphasised on the map by displaying the more central, better-connected nodes as proportionately larger circles. The circles are coloured – automatically, by the software – on the basis of attribute data that has been entered for each node. Nodes around education are coloured yellow, red signifies health and social care, and blue is socio-political. (How Kumu displays nodes and links can be customised by bits of Cascading Style Sheets coding, as used in Web site design.)

Drew switched to the Slipham map in Kumu itself, online, and demonstrated how each node can have attributes stored ‘within’ it. He also showed some of the ways that a map can be probed, for example by clicking on one node and having the map display only those other nodes directly linked to it as working contacts – or perhaps within two ‘hops’ rather than one. Selecting two nodes at opposite sides of the map, he got Kumu to show the immediate links of each, helping to identify a couple of nodes shared between them, which could be used as conduits for contact or liaison.

Drew demonstrated a network map produced for NHS Education for Scotland (NES). The connector lines here were interesting in three ways: they displayed as curved rather than straight; they came in three grades of thickness; and they seemed to indicate directionality, each line having an arrowhead at just one end.

The purpose of this investigation was to identify sources and flows of information. The thickness of the line indicates the ‘volume’ of flow (an attribute which you can control by adding a value to the data behind the connection), and although Drew did not explain this, the arrowheads clearly indicate the direction of information flow.

Drew used this diagram to warn of an effect in real-life mapping: one often works for a single client within the network (in this case, the NES), who readily provides their own links, and those initially dominate the map. If you want a more thorough picture, you will have to contact the organisations they have identified, and survey them to ascertain their links too. It may take a few iterations of this process to get the wider picture.
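
The iterative process is essentially a ‘snowball’ outward from the client. A sketch of the logic (the survey function and the toy contact data are hypothetical stand-ins for whatever real data collection you do):

    def snowball(seed, survey, rounds=3):
        """Collect links outwards from a seed organisation.
        survey(org) is a hypothetical stand-in returning the list of
        organisations that org says it is linked to."""
        known, frontier, links = {seed}, {seed}, []
        for _ in range(rounds):
            next_frontier = set()
            for org in frontier:
                for other in survey(org):
                    links.append((org, other))
                    if other not in known:
                        known.add(other)
                        next_frontier.add(other)
            frontier = next_frontier
        return links

    contacts = {                      # invented toy data
        'NES': ['Board A', 'Charity B'],
        'Board A': ['Charity B', 'College C'],
    }
    print(snowball('NES', lambda org: contacts.get(org, []), rounds=2))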

A second exercise (or game)
We had already had a simple discussion exercise about networks in our table groups. Drew now offered something more playful. He used Kumu to present an abstract network map where the nodes were identified by numeral only. Displayed next to this map on the left was a listing of the ‘top ten’ nodes ranked by betweenness centrality. Each table was to ‘adopt’ one node and then try to promote it up the centrality score table, either by adding a link, or deleting one. (The ‘adopted’ node did not have to be a terminus for the link added or deleted.) Based on table choices, Drew input the changes into the Kumu map and re-displayed the rankings. We did this a couple of times.

The game was competitive and fun, and less confusing than it might have been, because three tables adopted the same node, and two another. The biggest effects came from making or breaking links between nodes which were already well connected. This was a good exercise in learning how to ‘read’ a network diagram.
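
The game is easy to rehearse at home with NetworkX (a sketch; the graph and the ‘move’ are arbitrary):

    import networkx as nx

    G = nx.connected_watts_strogatz_graph(30, 4, 0.2, seed=3)

    def top_ten(graph):
        bc = nx.betweenness_centrality(graph)
        return sorted(bc, key=bc.get, reverse=True)[:10]

    print(top_ten(G))     # rankings before the move
    G.add_edge(0, 15)     # one 'move': add a link anywhere in the network
    print(top_ten(G))     # ...and watch the rankings shuffle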

Collecting information for mapping
After the exercise and a coffee break, Drew gave us guidance about how to prepare for network mapping by gathering information. Where a community is dispersed or hard to bring together, you might prepare an online survey; they’d used Google Forms, but Survey Monkey also works. The data may have to be fed in via Excel or Google Sheets, as Comma Separated Values (.csv). London Voluntary Service Council is currently doing a network mapping exercise using online forms.

If you are holding an event where participants are present, you could get people to input data straight into Kumu, or create a drawn-up paper sheet or questionnaire. Drew showed a model: an Organisational Mapping Sheet they had prepared to collect data for a project on tobacco reduction. Each organisation notes their name at the top of the sheet, and adds some ‘interest keywords’ – I guess this is used to sift the nodes into categories, and so would work best with a predetermined tag vocabulary.

The sample illustrated then had a number of small repeated tables, the first of which was for ‘your organisation’. One box asked ‘Sharing?’ – if you think your organisation is good at sharing, you tick it; if not, you leave it blank.

The sample form then listed five rows of activity: Online communications, Technical, Management, Financial and Community, and next to each of these was a box for ‘Skills’ and another for ‘Resources’. If you have technical skills, you tick that box, and if you have financial resources, you tick that. Otherwise, you leave them blank.

The sample sheet shown had seven other mini-tables identical to the first, except that rather than being about ‘your organisation’, these recorded your private opinion of the sharing abilities, skills and resources of the other organisations with which yours was most in contact. There was a note to assure people that ‘individual contributions will be confidential and unattributable’.

This is just one example. Depending on the theme and the nature of the community, the skill and resources sets may be different. Instead of a tick or the absence of one, you could calibrate the data with a numerical score, for example a plus or minus figure, or a number between 1 and 5, or a number of ticks. A calibrated assessment seems to have been used in the Berwick upon Tweed case study described earlier.

Other supplementary means for collecting data could be face to face or telephone interviews. If you are ‘iterating’ the investigation by contacting other organisations named in the linking, telephone interviews make a lot of sense unless you have an online form resource and can invite the second-round participants to fill that out too.

Not that difficult!
Drew showed one example of a fairly complex network map encompassing about 70 organisations, created by the manager of a Children’s Centre in Croydon to indicate those involved in some way in Croydon’s ‘Best Start’ programme for at-risk families with children under five. Following a workshop, she went home, created a Kumu account and, without previous experience of network mapping, created the map in two hours.

Another advantage of developing a network map in an online environment like Kumu is that Drew was able, as it were, to ‘look over her shoulder’ and help her remotely to develop the map further.

One problem with a network map created thus by an individual is that, although she thinks those links exist, she doesn’t know for sure, and the links are unqualified in other ways. A maxim in the network mapping community is ‘the node knows’ – best not to speculate but ask people and organisations in a prospective network what their connections really are.

Time-based networks: the CFAB example
Networks can have a time dimension, and Kumu can cope with these too. An example might be a flow-chart, or a process-mapping chart.

Family Maps. As mentioned briefly above, a recent project which Drew and David have tackled is for the Centre for Ageing Better (CFAB), which wanted to investigate how technology can be used to assist people in later life. They started by developing six ‘personas’. A persona is a fictitious person who embodies a set of characteristics typically found together, so can represent a sector of the population in a simulation game such as one that CFAB ran with 50 people at one of their conferences.

These personas were based on research conducted by the polling company IpsosMORI, who have a database of population characteristics from polling, plus focus groups. IpsosMORI had already concluded that three factors dominate in securing well being in later life: financial security, health, and social connections.

Based on IpsosMORI cluster work, the CFAB project created ‘Mary’, whose tagline was ‘Can Do and Connected’. The character was represented by a cartoon portrait by Drew, in which she says ‘I want to remain independent as long as possible!’ Five sentences explain that she is 73, owns her home outright, but feels she has to watch her spending. She recently lost her husband, but stays positive with support from friends and family, and engages in local activities. She has long-term health issues, but hopes things will improve and stays optimistic. She uses an old mobile phone for calls and texts, but her attitude indicates she would explore technology further if she thought it would help…

Mary lives in ‘Slipham’ (of course) and has connections with various agencies there such as the General Hospital, a community nurse and a bowling club, plus several individuals who are friends, children and grandchildren, etc.

For this project (perhaps through the gaming process?) they also developed time-based maps which showed how Mary’s network might evolve over time: she compensated for the loss of her husband by joining community social activities such as ‘PowerAgers’ (a walking group), but later had to give them up due to advancing ill health, which also changed her needs and her network of support from health and social care agencies. For the network mapping in Kumu, this evolution of her network was coded by tagging each node in it for inclusion in various year bands. You can then advance year by year (in this case, in five-year steps) to see how the persona’s network develops.
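
The underlying logic is simple enough to sketch (this is not Kumu’s actual mechanism, just the idea of tagging nodes with year bands and stepping through them; the fragments of Mary’s network below are invented for illustration):

    import networkx as nx

    G = nx.Graph()
    # Each node carries the range of years during which it belongs to the network.
    G.add_node('Mary', years=range(2015, 2041))
    G.add_node('PowerAgers', years=range(2020, 2030))       # joined, later gave up
    G.add_node('Community nurse', years=range(2030, 2041))
    G.add_edges_from([('Mary', 'PowerAgers'), ('Mary', 'Community nurse')])

    for year in range(2015, 2041, 5):                       # five-year steps
        active = [n for n, d in G.nodes(data=True) if year in d['years']]
        print(year, list(G.subgraph(active).edges()))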

The purpose of the exercise was to explore how support services might be better coordinated to help people as they get older – and the role which technology might play in that. This was allied to an investigation of how vulnerable people are to social isolation.

Drew spoke of the phenomenon of ‘social ageing’ – how our social connections change as we age. A related concept is ‘network risk’, which identifies the kinds of contact network that are vulnerable to sudden collapse. They tend to be the ones dependent on physical activity – but they could also be affected by poor public transport provision, or by financial hardship that means you can no longer afford to participate in activities.

Multi-level maps. Drew showed how for the CFAB exercise they created a custom version of ‘Slipham’. So long as the node entities reside in the same Kumu project, their links and other attributes will be inherited by other maps created within the project. Drew pointed out that a couple of the nodes on Mary’s personal map are also on the wider Slipham map – others, which might be relevant to Mary’s future happiness and well-being (such as Age UK Slipshire and the University of the Third Age), were not.

Linking to Asset Based Community Development projects

Drew’s final slide showed a very complex network map developed around several projects in Croydon, on which he and David are currently working.

I was interested to note that in the Croydon work, the network mapping is part of a larger programme using ABCD methodology – Asset Based Community Development. My awareness of ABCD has come from another community development practitioner, Ron Donaldson – who spoke at the NetIKX#78 event – and who uses ABCD in some of his own work.

Asset Based Community Development is an approach to developing activities and services within communities which focuses not so much on community needs as on the skills, resources and capabilities of individuals, groupings and organisations within a community. The approach was developed in the 1990s by John L McKnight and John P Kretzmann at the Institute for Policy Research at Northwestern University in Illinois, USA. The website of the ABCD Institute which they founded, anchored within the university’s Center for Civic Engagement, is at http://www.abcdinstitute.org/

For example, we may find out that the Methodist Church has a meeting hall, the school has a grassy area suitable for a neighbourhood fair, Darren is a whiz at Web sites, Ant is a cartoonist, Charmaine and Sue make Jamaican patties, Nguyen is a videographer and videomaker with his own kit, three of the gents from Men In Sheds would like to teach basic woodwork, Sarah has a pillar drill and lathe and can weld, Pushpinder creates theatrical costumes, three Green Party activists want to encourage materials recycling, Jordan can drive a minibus… If these assets can be put together in inventive ways, the community can start to help itself rather than waiting on help from on high.

A key tool in ABCD is the Capacity Inventory which gathers data about who has or can do what, and also finds out how they are connected to projects in the community. To me it has now become obvious that rather than a static card index or other capacity inventory database, an interactive network map with data behind it such as Drew had shown us using Kumu fits beautifully with Asset Based Community Development.
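
To make the fit concrete, here is a toy sketch of how a capacity inventory maps naturally onto a network structure (the owners and assets borrow from the invented examples above; Kumu would hold this kind of data behind the nodes):

    import networkx as nx

    # Invented capacity-inventory entries: who offers which asset.
    inventory = {
        'Darren': ['web sites'],
        'Nguyen': ['video'],
        'Sarah': ['welding', 'woodwork kit'],
        'Methodist Church': ['meeting hall'],
    }

    # A bipartite graph linking people/organisations to their assets.
    B = nx.Graph()
    for owner, assets in inventory.items():
        for asset in assets:
            B.add_edge(owner, asset)
    print(list(B.edges()))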

Audience feedback
Many people liked the example of ‘Mary’. Steve Dale said that it is not uncommon for our networks to shrink as we get older. Rob Rosset wondered whether that is something we should accept, or struggle against. Steve felt that maybe the ageing brain is less able to cope with lots of connections; but thinking about people he knew years ago, such as in his Navy career – their paths have diverged from his anyway, and he would rather stay close to a smaller circle of family and friends who mean a lot to him.

Someone else affected by the story of ‘Mary’ thought exclusion and isolation are the other side of networking. She added that many people are not confident and outgoing networkers, so as well as thinking about how to strengthen our networks by building the links between those who link readily, we should also think about those who stand on the sidelines only for lack of encouragement.

Conrad reflected on the game we had played. Some people will try to strengthen their local dominance in a network, whereas if they were less egotistical and prepared to connect on an equal basis with people at the heart of other clusters, more could be achieved. Drew commented that a network map can identify someone who occupies a strong network position – but that doesn’t guarantee the right constructive attitude!

David Wilcox reported that a concern of the London Voluntary Service Council is that in the current austerity climate, voluntary action is being crippled as the agencies and associations which used to serve as hubs are taken out. How can the existing groups become better at using network thinking and technology? But those organisations rarely have those skills and capabilities.

David has begun thinking it would be good to develop a range of personas which might represent Londoners – as a starting point for examining what kinds of connections they tend to have, and what they might benefit from in the future, either on their own or assisted by ‘Network Builders’. It looks as if, increasingly, we are going to have to create social infrastructure from the bottom up. He’d be interested to know if anybody else would like to get a project going around that.

Clare Parry thought that people may share a neighbourhood, but the communities don’t connect – the example she offered was of traveller communities not connecting to settled ones (different ethnic and cultural communities would be another case in point). David said that this was a feature of the ABCD work in Croydon which has been going for about four years. They have volunteer ‘community connectors’ and Drew has been using network mapping to help them identify useful points of inter-community connection.

Finally, Martin expressed concerns about the ability of network maps to misrepresent situations if the data input is wrong or insufficient. Drew said that network maps can give you insights – but if you really want to know what’s going on, you have to investigate that in the field.

SOME FINAL THOUGHTS
This was an interesting and well-attended NetIKX meeting. It’s nice when we have use of the Upper Mews Room at the British Dental Association – it accommodates 50 comfortably around tables and is well lit, very suitable for the round-table syndicate groups which are a hallmark of NetIKX meetings. (To learn more about NetIKX, see https://www.netikx.org.)

As one of the early slides said, there are various ways of getting a picture of how things and people are organised, such as through stories. Network analysis is a more structural and structured method. But I think more people are comfortable with stories and I suspect some of my NetIKX colleagues felt they had waded beyond their depth when we tackled network theory! This may be why the story of ‘Mary’ resonated so well – however fictitious, there was a story in there. The stories around other projects such as the Berwick upon Tweed one also helped bring these to life for me.

I was intrigued enough by betweenness metrics and other abstract aspects of network theory to do more reading around them, if only to help explain them. I hope my expanded explanations of how these things work with reference to my fictitious abstract ‘A to Z’ network are (a) helpful and (b) not misleading!

Obviously there are subtleties of social network analysis and visualisation which we didn’t cover (and which could have led to rapid cognitive overload had it been attempted). For example: the directionality of links; the strength of links; how, if at all, to weight the value of a particular node’s contribution to the network. I look forward to playing with Kumu to discover more, and I have signed up, as suggested by Drew.

On the CFAB example and social ageing: the story of Mary made me think too. Many people of my age (early 60s) and even decades older find that the Internet and social media – even Facebook plus digital photography – help us keep in contact with friends dispersed across the globe, people whom we meet face to face but rarely, and even make new friends by being introduced online to friends of friends. Quite cost-effectively, and even if we are housebound.

Writing and reading – perhaps falling out of fashion? – can network us with others in great depth. Frequent emails are important to my mother and me. Text can link us to ideas across time as well as space, for example by reading books – or accounts like this of interesting meetings we may have missed…

In Mary’s story of ageing, illness decreased her access to wider networks, but that is not the only factor. There are many activities I cannot join for lack of funds. I cannot begin to express how grateful I am to have a 60+ Oystercard and therefore free travel across London!

Blog for March 2016 Seminar: Storytelling For Problem Solving & Better Decision Making

Conrad Taylor writes:

On 22 March 2016, Ron Donaldson came to speak on the topic ‘Storytelling for Problem Solving and Better Decision Making’. This attracted nearly forty people, a larger than usual NetIKX attendance.

The focus of Ron’s work is helping organisations and groups of people to solve problems and improve understanding. He is eclectic in the workshop exercise methods he uses, drawing on Cognitive Edge methods, Participatory Narrative Inquiry methods (https://narrafirma.com/home/participatory-narrative-inquiry/), and also the ‘TRIZ’ methods and models for inventive problem-solving (www.triz.co.uk) developed in the Soviet Union by Genrich Altshuller.

Ron describes himself as a ‘knowledge ecologist’. He has a degree in Ecology and Geology and a professional interest in ecological thinking and nature conservation, having worked for 21 years at English Nature, first on systems analysis and process modelling, then on knowledge management.

In around 1998, a workshop was run at English Nature by Dave Snowden, later the founder and Chief Scientific Officer of Cognitive Edge, but then a director in the IBM Institute for Knowledge Management. Snowden was then developing a framework for understanding complexity in organisational situations and a set of working methods for engaging people in problem solving. Exposure to these ideas and methods turned Ron’s interest towards the power of storytelling and knowledge management. Ten years later this interest pulled him away from English Nature into self-employment.

Ron explained that he has difficulty with the term ‘knowledge management’ – does ‘knowledge’ mean everything an organisation knows? Is it what’s left after you have pigeonholed some stuff as data and some as information? If knowledge is the stuff that is in people’s heads, as many would say, can it be managed? This is part of what turned him towards describing himself as a ‘knowledge ecologist’ instead: because one can at least aspire to manage the conditions/environment and community practices within which people know and learn things, and share what they know. Also, because ecology de-emphasises the individual and focuses on systems and interaction, it tends to subvert ‘business as usual’ in search of better and more communitarian ways of doing things: dampening ‘ego’ and amplifying ‘eco’.

Since 2008 Ron has been working freelance. In the last three years this has taken him into a series of local engagements, which he used to illustrate to the meeting the power of storytelling in solving problems and making better informed decisions. He had chosen examples from work around environmental issues, work with public services, and work with health.

Ron then went on to explain his various methods, including storytelling, small-group discussion (with half of each group moving on after a fixed time – rather like David Gurteen’s Knowledge Cafés) and techniques such as ‘Future Backwards’, which Ron later used as an exercise for the NetIKX group (see below).

Ron emphasised that he felt that he simply guides the process, facilitating without directly engaging with the subject matter. In fact, Ron has made this something of a guiding principle for himself: not to engage much with the content, simply make sure that people are participating, create the starting conditions, context and activities to support that, and reduce the opportunity for individuals to take over the conversation.

In a project that involved getting data shared between different local firefighting forces (even the hoses of one force would not couple with those of another), Ron suggested that they organise a workshop and invite people from all the local forces plus anyone connected with data and information externally, whether they collected it, processed it or used it. In this case the very fact that people were talking led to positive developments, both in practice and in the development of a ‘Knowledge Network’ across the fire services. Here Ron used an exercise called the Anecdote Circle, which has its origins with Shawn Callahan and colleagues in the Anecdote consultancy (http://www.anecdote.com/) in Australia. The Anecdote consultancy’s own guide to how to run an anecdote circle is at http://www.anecdote.com/pdfs/papers/Ultimate_Guide_to_ACs_v1.0.pdf. However, Ron went on to describe how he implements this approach.

Ron then gave another example. Steve Dale has been working with a project called the Better Policing Collaborative, which unites five universities and five police forces in a search for priorities in innovation in policing, which should lead to lower crime rates and a safer community. Steve and Ron worked together to facilitate a workshop at Birmingham University, getting police to tell their stories. Again, this was an application of Ron’s approach to the Anecdote Circles method.

One of the stories told concerned a man who had been arrested for shoplifting, somewhere in the West Midlands. It was his fourth offence, and this time he was going to be prosecuted. What social services knew (but the police didn’t) was that all the people in this person’s household had poor health. The Housing Association (HA – and they alone) knew that all the houses in that area were suffering badly from damp. What the hospital knew (but not the HA, nor the social service, nor the police) was that they were beginning to be inundated with admissions for major breathing difficulties and asthma. These connections had come to light only as the result of informal conversations between members of these groups, when they happened to be together at a conference. The way the story ended was that money was found from a health budget to pay the housing association to sort out the problems of damp; and it is hoped that as health improves, so will financial well-being, with a concomitant improvement in the crime statistics.

What Ron took away from that was that although the purpose of the exercise was to share stories between police, the story cast light on the advantages to society if stories could be shared between different agencies and departments.

Finally, Ron discussed some training courses run for a group of West Midlands nursing staff with responsibility for knowledge management.

One of the major health problems in Coventry, contributing to the pressure on services, is Chronic Obstructive Pulmonary Disorder (COPD), including emphysema and chronic bronchitis. Ron suggested that they should invite anyone engaging with COPD in the Coventry area to join a meeting using storytelling workshop methods. There has now been a series of such workshops, involving NHS staff, the various lung charities, staff from Coventry University, a chaplain who was involved with terminally ill sufferers at the hospital, and people suffering from COPD, including two women patients who had met in the hospital waiting room and were now supporting each other, as ‘buddies’, by sharing what they know.

Ron described what happens as the result of sharing stories as ‘mapping the narrative landscape’ for the subject you are dealing with. So, the participants at the workshop were asked to come up with ideas, and then cluster around the ideas that appealed to them the most.

What these COPD-focused workshops identified was that, as well as the various hospital-based and home visit services, it would also help to organise social events that people with COPD could attend and be made aware of knowledge available from the experts, who would also be there. So the meetings have been happening, on Monday afternoons in Coventry – people talking together, and playing Bingo, as well as talking to the specialists and the charities on a general or one-to-one basis.

Ron followed this observation with some stories about how COPD patients have been benefitting from the drop-in sessions, and how much they valued them.

The Coventry COPD drop-in project, known as RIPPLE (standing for ‘Respiratory Innovation Promoting a Positive Life’), has now been picked up by the innovation fund NESTA and mentioned in their recent report ‘At the Heart of Health – Realising the value of people and communities’. They cite RIPPLE as a great example of Asset-Based Community Development (ABCD), which is an approach that encourages people to discover their own assets and abilities and build what they want on that basis, rather than relying on the provision of services.

There is more at Ron’s Web site about the RIPPLE project (including a video) and NESTA’s reaction to it, here: https://rondon.wordpress.com/

Now the West Midlands has the go-ahead to fund another six RIPPLE-style community projects, as well as a pilot for a comparable initiative around diabetes.

Before the tea break, Ron briefed the meeting about the form of ‘Participatory Narrative Inquiry’ exercise that those attending were about to do, to gain some experience in table groups of a type of exercise evolved by the Cognitive Edge network, called ‘Future Backwards’. This is the same exercise that the fire service groups had undertaken. NetIKX members (and those who attended the meeting) can find out more about this in the fuller report on the NetIKX members’ website (www.netikx.org).

Ron brought the exercise to an end with about fifteen minutes to go, so that he could add some further information. He described how, in collaboration with Cynthia Kurtz, he has set up PNI2, the Participatory Narrative Inquiry Institute, as a membership organisation for people who use these methods (http://pni2.org).

Ron ended the afternoon by explaining more about the way he applies the various exercises and how he decides which technique to use in which circumstances. He emphasised, however, the importance of talking. Churchill’s comment that ‘To jaw-jaw is always better than to war-war’ seems to apply just as well to less dramatic situations than war!

Ron added that he always welcomes further conversations around these topics and would be grateful for referrals to any communities that might benefit from a similar approach, or gatherings wanting to hear some heart-warming stories. His contact details are:

Ron Donaldson, freelance knowledge ecologist

email:    
mobile:   07833 454211
twitter:   @rondon
website: https://rondon.wordpress.com/