Blog for the September 2020 Seminar: TRIZ

TRIZ (a Russian acronym for a phrase usually translated as ‘the theory of inventive problem-solving’) is not a well-known technique in knowledge and information management circles. It is the brainchild of Genrich Altshuller, an engineer, scientist, inventor and writer – who, incidentally, paid the price for his innovative thinking style by displeasing Stalin and consequently being sent to a labour camp. However, he used his experiences there to further refine his problem-solving techniques!

TRIZ is still most widely used in engineering, but its principles are applicable to any kind of problem, not just technical ones.

Ron Donaldson, NetIKX committee member and TRIZ expert at Oxford Creativity, took us through the fundamentals of TRIZ in an intensive yet enjoyable seminar, enhanced by the wonderful cartoons of Clive Goddard. The TRIZ approach is based on the principle of analogous thinking – often we limit ourselves to the solutions found within our own area of expertise, whereas in fact we could apply solutions from other domains where similar problems have been faced. The advantage of this approach is that you learn to think conceptually and to view a problem in an abstract way, rather than becoming bogged down in detail.

But, given that most of us lack this breadth of knowledge, how do we access these creative solutions? Altshuller analysed 50,000 patent abstracts to identify how the innovation had taken place. From this he developed the essential TRIZ methodology: the concept of technical contradictions, the concept of the ideality of a system, the contradiction matrix and the 40 principles of invention. He also modelled creative thinking tools and techniques by observing creative people at work and uncovering patterns in their thinking.

At the heart of all problems requiring an inventive solution, there is a contradiction: for example, we want something that is both strong and lightweight, but how do we increase strength without also increasing weight? The existence of a contradiction does not mean you cannot solve a problem: Ron suggested that we need to ‘channel our inner Spice Girl’ and state what we ‘really, really want’ as there is usually a way of getting it without having to change anything!

Altshuller’s research identified three characteristics of creative people: they think without constraints; they think in time and scale; and they get everything they want. When you have identified your ideal outcome, you can work ‘backwards towards reality’.

One of the TRIZ tools is the contradictions matrix, which allows you to map the contradictions inherent in your problem and to identify inventive principles to solve them. We saw examples of the principles and how they can be used in different contexts: for example, principle 13 (The Other Way Round) could involve turning an object upside-down, or making the fixed parts moveable and the moving parts fixed. TRIZ also emphasises the importance of using the resources that you have, which supports sustainability and reuse.
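In spirit, the contradiction matrix is just a lookup table: you name the parameter you want to improve and the one that worsens as a result, and the matrix suggests which of the 40 inventive principles past inventors have used for that pairing. Here is a minimal Python sketch of that idea; the parameter names and cell entries below are invented placeholders for illustration, not the published 39×39 matrix values.

```python
# Minimal sketch of a TRIZ-style contradiction matrix lookup.
# The real matrix covers 39 engineering parameters; the entries
# below are illustrative placeholders, not the published values.

PRINCIPLES = {
    1: "Segmentation",
    13: "The Other Way Round",
    35: "Parameter Changes",
    40: "Composite Materials",
}

# (improving_parameter, worsening_parameter) -> suggested principle numbers
MATRIX = {
    ("strength", "weight"): [1, 40],
    ("speed", "accuracy"): [13, 35],
}

def suggest_principles(improving, worsening):
    """Look up inventive principles for a stated contradiction."""
    numbers = MATRIX.get((improving, worsening), [])
    return [(n, PRINCIPLES[n]) for n in numbers]

# The classic 'strong but lightweight' contradiction from above:
print(suggest_principles("strength", "weight"))
# -> [(1, 'Segmentation'), (40, 'Composite Materials')]
```

The point is not the code but the shape of the tool: the matrix turns a concrete problem into an abstract pairing, and the principles turn that abstraction back into candidate solutions.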

Ron set us two questions to consider in the breakout sessions (which, luckily, we were able to replicate effectively via Zoom!): how would you use TRIZ within knowledge management, and which bits of the session really inspired you? This led to a discussion ranging across the design of tin-openers, Altshuller’s science fiction stories and the challenges of applying inventive solutions in the public sector. It is safe to say that we were all intrigued by what we had learned and keen to explore further.

TRIZ is open source and is not copyrighted – so you can try out the toolkit for yourself. The contradictions matrix, the 40 principles and other tools are free to download from the Oxford Creativity site, where you can also sign up for free webinars on TRIZ. Give it a go and unleash your genius!

By Carlin Parry

Carlin’s LinkedIn web address is:


Blog for July 2020 Seminar: A Library during lockdown

Antony Groves has been working at the University of Sussex for 15 years, starting in a ‘front line’ role and continuing into his current job, where he talks to and supports a large number of students at both undergraduate and postgraduate levels. He is a member of CILIP and blogs for the Multimedia Information and Technology Group. Antony is a reflective practitioner and believes in making things happen. At present there are two major priorities: proactively working to make the UoS website accessible by the government deadline of 23 September 2020, and reactively working to make the UoS website and services as useful as possible following the Covid-19 lockdown in March.

Two key ideas run through this work: accessibility and usability. Accessibility can involve straightforward things such as font size, colour changes and ensuring that everything is operable from the keyboard. For more on accessibility, see ‘Strategic approaches to implementing accessibility’ (more colloquially, ‘the Kent strategy slides’).

2019 saw over a million visits to the library website, 6,170 on the busiest day, Tuesday 14 May. There has been a shift (a pivot) from physical visits to digital space. The main focus is on the user.
At this time there is a rush to open things up after lockdown without necessarily thinking about who is coming through the door and what they want now. Spending your days updating and coding can leave you ‘removed’ from the user. The Government Design Principles are a good place to start –

Now this is for everyone. You start with ‘user needs’ and you design with data. You build ‘digital services’ not websites. Remember that ‘A service is something that helps people to do something’. Iterate, then iterate again. We began by speaking to the academic community and gathering feedback. Over 100 pieces of feedback were collected and grouped into four main themes: architecture, behaviour, content and labelling. Top tasks were identified (e.g. searching for and finding books, booking rooms, accessing an account) –
People mainly use a handful of tasks, so develop these first.

Architecture – “Confusing having two types of navigation”.

Behaviour – “Have never used library search tabs”.

Content – “More photos of the library and more infographics”.

Labelling – “Skills hub should have a description mentioning academic skills”.

Design with data – We benchmarked with other institutions.

We looked at Google Analytics – most and least viewed pages, along with bounce and exit rates. We ran ‘card sorts’ to determine site structure and created user stories to help edit pages. Two example results: the new ‘Making your research available’ section has very low bounce and exit rates, and these have also dropped across the whole site, indicating that people are finding what they expect to. The ‘Find a book in the library’ page had 6,785 views, compared with 1,182 in the 2018 autumn term when it was located in the ‘Using the Library’ section.
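For readers less familiar with these metrics, the arithmetic is simple: a page’s bounce rate is the share of sessions that entered on that page and viewed nothing else, while its exit rate is the share of all views of that page that were the last page of a session. The sketch below illustrates both with invented session data (the page names are hypothetical, not from the Sussex site).

```python
# Sketch of the bounce-rate and exit-rate arithmetic behind the figures above.
# Session data is invented; each session is the ordered list of pages viewed.

sessions = [
    ["find-a-book"],                           # a bounce: entered and left at once
    ["home", "find-a-book", "opening-hours"],
    ["find-a-book", "home"],
]

def bounce_rate(page, sessions):
    """Share of sessions that started on `page` and viewed nothing else."""
    entrances = [s for s in sessions if s and s[0] == page]
    bounces = [s for s in entrances if len(s) == 1]
    return len(bounces) / len(entrances) if entrances else 0.0

def exit_rate(page, sessions):
    """Share of all views of `page` that were the last page of a session."""
    views = sum(s.count(page) for s in sessions)
    exits = sum(1 for s in sessions if s and s[-1] == page)
    return exits / views if views else 0.0

print(bounce_rate("find-a-book", sessions))  # 0.5
print(exit_rate("find-a-book", sessions))
```

A falling bounce rate on a landing page, as reported for ‘Making your research available’, is a reasonable signal that visitors are finding what they came for rather than giving up.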

Iteration goes on and on. There is still much to ‘unpack’ and ‘improve’. User testing is currently being organised, and usage is being analysed to see which parts of the website are seeing fewer views and less engagement. The library is working with teams inside and outside UoS to make its digital services as useful as they can be to the community.

When Covid-19 hit the UK, we considered carefully how to respond and devised a three-pronged approach: Pivot / Add / Hide. ‘The Pivot’ involved moving the library from a physical presence into a digital space: for example, study rooms were no longer available, so room bookings were changed into Zoom bookings. ‘The Add’ meant introducing new services: a ‘click and study’ service starting this week, whereby individuals can book a study place; a ‘click and collect’ service; and ‘Library FAQs’ appropriate for the period of lockdown. ‘The Hide’ concerned removing information on the website that was no longer appropriate, such as ‘Information for visitors’. Instead, we created a guide to ‘Open Access Items’ and a ‘Schools Guide’.

All this work has been recognised by a ‘Customer Service Excellence’ award.

Antony is pleased that the work of the UoS Library Staff has been recognised but he takes it with a ‘pinch of salt’ as he is intent on doing more ‘user testing’ and receiving much more feedback as well as talking to his community.

In conclusion, Antony pointed to the inspiration behind this approach to digital services – ‘Revisiting Ranganathan: Applying the Five Laws of Library Science to Inclusive Web Design’ – along with the ten changes made to the library website since lockdown.

Rob Rosset 25/07/2020




Blog for May 2020: Gurteen knowledge cafe

How do we thrive in a hyper-connected, complex world?

An afternoon of conversation with David Gurteen

There was a great start to this Zoom meeting. David Gurteen gave some simple guidance to participants so we could all Zoom smoothly – a great best-practice demo. We are all becoming good at Zoom, but simple guidance on how to set the visuals and mute the sound is a wise precaution to make sure everyone is competent with the medium. He also set out how the seminar would be scheduled, with breakout groups and plenaries. It was to be just like a NetIKX seminar in the BDA meeting room, even though it was totally different! I felt we were in very safe hands: David was an early adopter of Zoom, but still recognises that new people will benefit from clarity about what works best. Well done David.

The introduction set the scene for the content of our café. We were looking at how we live in a hyper-connected, complex, rapidly evolving world. David outlined many dimensions of this connectedness, including changes in transport, the internet, social media and global finance.

In his view, over the last 75 years this increased connectivity has led to massive complexity, and today we can conceive of two worlds: an old world before the Second World War and a new world that has emerged since 1945. Not only are our technological systems complex, but we human beings are immensely complex, non-rational, emotional creatures full of cognitive biases. This socio-technical complexity, together with our human complexity, has resulted in a world that is highly volatile, unpredictable, confusing and ambiguous. Compare the world now with the locally focused world that dominated the pre-war years.

Furthermore, this complexity is accelerating as we enter the fourth industrial revolution in which disruptive technologies and trends such as the Internet of Things, robotics, virtual reality, and artificial intelligence are rapidly changing the way we live and work. Our 20th-century ways of thinking about the world and our old command and control, hierarchical ways of working no longer serve us well in this complex environment.

Is it true that if we wish to thrive, we need to learn to see the world in a new light, think about it differently, and discover better ways in which to interact and work together?

Breakout groups

With practised expertise, David set us up in small breakout groups to discuss the talk so far. Did we agree, or did we feel continuity was a stronger thread than change? Then we swapped groups to take the conversation further.


After the breakout groups, David looked at the two linked ideas behind Conversational Leadership. He had some wonderful quotes about leadership. Is the old command-and-control model gone? Do leaders have to hold a specific role, or can we all give leadership when the opportunity is there? Of course, David provided examples of this, but perhaps the most powerful example came after the seminar: a 22-year-old footballer changing the mind of a government with an 80-seat majority! You don’t need to have the expected ‘correct’ label to be a powerful leader.


We also looked at the other element: talking underpins how we work together. Using old TV clips and quotes, David urged us to consider how we communicate with each other, and whether there is scope to change the world through talking. Again, there was plenty of food for thought as we considered new ideas such as ‘unconscious bias’, ‘media bubbles’, ‘fake news’ and the global reach of social media.

We then broke into small groups again, to take the conversation further, using David’s talk as a stimulus.


At the end of the breakout groups, we re-joined as a mass of faces smiling out of the screen, ready to share our thoughts. It is a wonderful thing, when you make a point, to see heads nodding across the Zoom squares. I recommend it to anyone who has not tried it!

Some themes emerged from the many small-group chats. One was the question of the fundamental nature of change. Is our world really so different when the humans within it remain very much the same? We looked very briefly at what we think human nature is and whether it remains constant despite the massively different technology we use daily. Even if humans are the same fallible clay, the many practical ways we can now communicate give us much more potential to hear and be heard.

We also considered the role of trust. In our workplaces, trust often seems to be in short supply, but it is key to leaders taking on authority without becoming authoritarian. The emphasis on blame culture and short-term advantage has to be countered by building genuine trust.

Is there potential for self-governing teams? The idea sounds inviting but would not in itself ensure good leadership or the sharing of ideas. The loudest voice might still monopolise attention – and with some justification, as not everyone wants to be proactive. Some prefer to follow as their choice, and others like to take part but balk at the tedium of talking through every minute decision! The idea may have potential, but we agreed it would not be a panacea.

We did agree that roles and rules could be positive to help give shape to our working lives, but that they need not constrict our options to lead when the time comes.  And we can see the leadership role that our professional calling suggests.   With so many new information channels, so many closed groups and so many conflicting pressures, as information or knowledge professionals, we can take a leadership role in helping and supporting our chosen groups of very human work colleagues to understand and thrive in this complex and evolving world. Conversational Leadership should be one of the tools we take away to enable our work with colleagues.

Final Notes:

The NetIKX team.

NetIKX is a community of interest based around Knowledge and Information Professionals. We run 6 seminars each year and the focus is always on top quality speakers and the opportunity to network with peers. We are delighted that the Lockdown has not stopped our seminars taking place and expect to take Zoom with us when we leave lockdown! You can find details of how to join the lively NetIKX community on our Members page.

Our Facilitator

David Gurteen is a writer, speaker, and conversational facilitator. The focus of his work is Conversational Leadership – a style of working where we appreciate the power of conversation and take a conversational approach to the way that we connect, relate, learn and work with each other. He is the creator of the Knowledge Café – a conversational process to bring a group of people together to learn from each other, build relationships and make better sense of a rapidly changing, complex, less predictable world. He has facilitated hundreds of Knowledge Cafés and workshops in over 30 countries around the world over the past 20 years. He is also the founder of the Gurteen Knowledge Community – a global network of over 20,000 people in 160 countries. Currently, he is writing an online book on Conversational Leadership. You can join a Knowledge Café via his website.

Blog for January 2020: Keeping the show on the road in a virtual world

Topical News!

Virtual meetings are pretty much the only meetings in town! With Covid-19 rampaging through the UK, we all need to get skilled at the art of virtual meetings as a top priority. Now is the time to show your value as a KM professional by providing skills and knowledge in the virtual meeting space. Read on for help!

Introduction – the age of disruption

This is a time when we all learn to live with digital disruption.   Processes and procedures that had lasted year upon year are suddenly subject to brand new ways of doing things.  One of these changes has been to meetings.  We no longer need to travel to go to meet someone.  We have the potential to have a virtual meeting, where wonderful technology means that geography does not stop us sharing live documents and possibly even admiring each other’s outfits!

It is interesting to see this become widespread as the health risks of face-to-face meetings grow all around us. Remote meetings are going to be a regular event for many of us, so let’s put in the effort to do them well.

We can go to meetings with all the information we need in the palm of our hands, via laptops or smart phones, leaving all those cumbersome files and bundles of paper behind.  This opens a new world of opportunity for knowledge sharing and knowledge transfer.  These changes have many advantages but, as always, it pays to think carefully about the disadvantages too, so that we can take steps to reduce them.  We still need to build confidence that we all understand digital opportunities fully and are certain to get the best from them.

This article reports back from a NetIKX (Network for Information and Knowledge Exchange) seminar where we spent a full afternoon grappling with the issues linked to virtual meetings.

Speaker: Paul Corney

Anyone who has sat through a meeting where many people are intently studying their mobiles will know the frustration that can cause. And virtual meetings are notorious for the problems that technology can introduce. Paul Corney, President-elect of CILIP and an author and speaker of repute, took us to the heart of the issues for knowledge managers. If meetings are one way that we share knowledge, then we, working as we do to ensure the best possible sharing takes place, should be at the forefront of establishing good practice. A lively and engaging speaker, Paul at once convinced his audience that he would be able to take us through the minefield of virtual meetings and help us master the essentials of good practice.

The potential range

We considered the possibilities for a wide range of meetings. Paul has a wealth of experience: he had been line-managed by someone on the other side of the globe, 9,000 miles away – clearly virtual meetings would be required, and when it is your boss on the line, you don’t want any distractions messing up communication. He also highlighted examples from the recently published KM Cookbook, including the International Olympic Committee, whose Knowledge Management programme began in Sydney in 2000 and where significant amounts of knowledge are now organised and transferred. This allows learning to be disseminated to wider groups than ever before. It also highlights a variety of issues, such as organising subgroups and breakouts.

An example

We did not just talk about virtual meetings – we invited one of our members, who could not attend because he was sick, to join us online. True to form, in one way it was wonderful: Conrad was able to talk to the crowd in the room from his sickbed. (Please don’t worry, he has recovered now.) He spoke about his experience of virtual meetings, bemoaning, in his memorable phrase, the ‘survival of the loudest’. But the technology only delivered half its promise! We could only hear Conrad, as the visuals refused to work. That made the experience less rich, although as a demonstration that technology can let you down it was very apposite. If Conrad had been invited to give a long speech, this could have been a disaster, as it is much harder for an audience to concentrate when there are no visual cues to keep their attention. As it was, we only suffered from not learning what colour pyjamas Conrad wears!

Video mishaps

Paul took us through a short masterclass, aided by a stunning slide set, looking at the benefits and pitfalls: the good, the bad and the just plain awkward. One of the resources he introduced was a short video clip, ‘A Conference Call in Real Life’, which portrays a virtual meeting as if it were a traditional face-to-face meeting. It presents what we know can go wrong, made hilarious by being acted out: the times when people are talking but the sound has gone; the strange ritual of ‘Pete has joined the meeting’ intoned several times as Pete’s link drops and he has to keep getting back up and running; and, of course, the ‘lurker’ who was in the meeting all along but did not let anyone know he was there! Believe me, it is very funny when you see it, and it certainly highlights all the possible jinxes we can meet when we try virtual meetings.


As Knowledge Management advocates, we understand the importance of the medium when messages are to be transmitted, and it is vital that we don’t reduce our ability to share when we embrace the most forward-looking technology. The video clip was just one of the valuable resources we looked at during the meeting. Since the seminar, NetIKX has collected a small set of resources that can help in understanding the issues; they are available through our website.

Audience input

One great feature of a NetIKX meeting is that the attendees are all participants who contribute their own learning from experience. As a result, we could pool our ideas about the different technologies we had used, along with stories and anecdotes from actual meetings we had survived. One example that I loved was the dry comment about an internal team meeting with a home worker: ‘the meeting didn’t go well, but at least we all saw her sitting room!’ It brings back memories of the famous incident of the expert whose children toddled and crawled into view while he was being interviewed live on air! It is a useful reminder, for all video-link meetings, to ensure you have an appropriate background setting…


Paul provided us with a table outlining the pros and cons of different meeting software. It was particularly helpful to get the facts, augmented by the experience of people in the room. Of course, the ‘best choice’ differs depending on the type of meetings you intend to support and the available resources. One well-resourced organisation uses Microsoft Teams, which can control social media use through that device, while others use Zoom, a simpler choice, or Webex, the more traditional option. (This very useful table is available on the NetIKX website.) Once your software is chosen, you need to ensure that there are no problems with users having different software versions or incompatible systems – and remember that simply because people have the software does not mean they know how to use it effectively!


Of course, the best meetings have help and support from technical experts – a strong reason for keeping good relations with our counterparts in the IT department! Firewalls may have to be negotiated without creating security risks: in your eagerness to facilitate knowledge sharing, you may forget to consider the dangers of ‘leaking’. There are many technical issues to negotiate to get the best possible solution to your virtual meeting needs.


And so, we come to the non-tech questions. What differences do we have to manage with a virtual meeting compared to traditional meetings? Do you need different rules? Will there be alternative ways to enforce them? Are there timing issues, or cultural issues, and how do you get feedback to learn how well things worked and where you can improve? One issue that we considered carefully was whether a good meeting chair would automatically be a good virtual meeting chair, or whether some different skills are needed. A solution could be to have two chairs: one to manage the meeting content and another to monitor and enforce protocol. This could solve all your problems – or possibly lead to utter confusion and conflict!

Paul suggested a useful resource: Erin Meyer’s book ‘The Culture Map’, which includes a chapter called ‘The most productive ways to disagree across cultures’. He offered the example of the words ‘that is really interesting’, which from an English person with a dry turn of phrase can carry an idiomatic meaning contrary to the literal one.


The meeting highlighted lots of useful ideas. We then considered these in table discussions, so that the participants (not including the virtual entrant – we let him retire early) could pull together the ideas they had found useful. NetIKX meetings always include a time for table discussions, so that people have a chance to embed the ideas in their own context and pick up ideas from networking with people from other workplaces. In this case, each small group considered the most useful tip from the meeting. We then amalgamated the ideas from all the groups into a main list and voted for the best of all! This was fun, and perhaps a little frustrating, as the results were left for me to reveal in this article. I will give our full list, as all the tips were deemed useful. Here are the TOP TEN in reverse order of popularity.

10. Consider security – don’t overlook this when tackling the technology issues.

9. Consider if the meeting needs to have small groups, or specific break-out groups.

8. Ensure the participants understand the established etiquette.

7. Ensure participants are confident and competent with the technology before the meeting starts.

6. Consider how the role of Chair will need to adapt to the virtual format.

5. Consider if you can build on face to face meetings to supplement the virtual ones.

4. Decide whether you need two people taking lead roles: a Chair of Content and a Chair of Protocol.

Are you ready?  Drumroll please! Now for the top three:

3. Consider cultural issues, as these may be emphasised and exacerbated by the virtual format.

2. Preparation is vital: IT compatibility and time issues etc. need to be thought through.

Yes, in top place!

The recommendation that reminds us all that virtual meetings will ultimately have the same dynamic as any other meeting:

1. It is most essential to have a clear purpose and outcomes that are understood by all participants.


When the NetIKX meeting ended, the conversations did not. Refreshments helped the chatter flow, and we continued with a very satisfactory networking session with wine and soft drinks, finger food and chat. All in all, it was a highly successful NetIKX meeting with a dazzling speaker and plenty of learning for all concerned. I hope this summary of what went on has been useful for you. If you want more, here are three valuable resources:

Buy (or win) Paul Corney’s book:

Paul has a new book available to buy: The KM Cookbook: Stories and Strategies for Organisations Exploring Knowledge Management Standard ISO 30401, by Chris J. Collison, Paul J. Corney and Patricia Lee Eng.

NetIKX has two copies and is running competitions on its website for them. The first was won by one of our members, who works with Plan International. The next competition will be later in the spring – watch out for it on our website. The book is published by CILIP.

Website resources linked to this meeting

Each seminar has a page on our website where we collect resources relevant to that meeting.  However, this may be for members only. Look at the page for January 2020.  This includes up-to-date information on Zoom, Microsoft Teams etc.

Issues Checklist

For our members, we have compiled a simple checklist, bringing together all the ideas from the meeting.  It could be a useful starting point for thinking through the issues so that you have expertise in identifying how to prepare for the best possible virtual meetings.

To join NetIKX and so gain access to this material, please go to our website and use the joining form – or alternatively come to our next seminar. This will be a virtual meeting using Zoom and led by someone with considerable experience in running virtual meetings, David Gurteen.  Please look at our website for details.   Contact us via the website for an opportunity to attend as our guest to enjoy a chance to talk with our members, as a taster to see what NetIKX could offer you or your organisation.   We look forward to you joining us then.

This article is compiled by Lissi Corfield, based on the presentation by Paul Corney and the contribution of attendees at the NetIKX seminar in January 2020.

Taxonomy Bootcamp 2019 – featuring our Book Prize Draw!



David Penfold at the Taxonomy Bootcamp

For some years, NetIKX has been an organisational supporter of the Taxonomy Boot Camp London event (TBCL) held at Olympia in the early Autumn. That means that we help to promote TBCL, especially to our members and social media followers, and in exchange we get a stall / table-top location to use to promote ourselves, two free tickets, and 30% off for our other members should they wish to take advantage.

This year we had a successful innovation: a Book Draw. It included two books. The first was The KM Cookbook by Chris Collison, Paul Corney and Patricia Lee Eng, from Facet Publishing; two of the authors have spoken at NetIKX seminars, so they are ‘family’. The second was The Knowledge Manager’s Handbook, 2nd edition, by Nick Milton and Patrick Lambe, from Kogan Page; Patrick was present at TBCL and delivered a tutorial session on the opening day. The Book Draw was popular with the authors and publishers and led to considerable interest in our stall. We look forward to next year’s Bootcamp bringing more benefit to NetIKX members!

Our member, Anoushka Ferrari, was with Helen Lippell, organiser of the Taxonomy Bootcamp.

Anoushka Ferrari with Helen Lippell, organiser of the Taxonomy Bootcamp

Emily Hopkins won the KM Cookbook.

Emily Hopkins, winner of the KM Cookbook


Blog for July 2019: Content strategy

The speakers for the NetIKX meeting in July 2019 were Rahel Bailie and Kate Kenyon. Kate promised they would explain what content strategy is, what it isn’t, and how it relates to the work of knowledge management professionals. The two speakers came to this work from different backgrounds: Rahel deals with technical systems, while Kate trained as a journalist and worked at the BBC.

Managing content is different from managing data. Content has grammar and it means something to people; data, such as a number in a field, is less complex to manage. This is important to keep in mind, because businesses consistently make the mistake of trying to manage content as if it were data. Content strategy is a plan for the design of content systems. A content system can be an organic, human thing: it isn’t a piece of software, although it is frequently facilitated by software. To put together a content strategy, you have to understand all the potential content you have at hand. You want to create a system that gives repeatable and reliable results: one that deals with who creates content, who checks it, who signs it off and where it is going to be delivered. The system must govern the management of content throughout its entire lifecycle; the headings Analyse, Collect, Manage and Deliver can be useful for this.

Kate pointed out that if you are the first person in your organisation to be asked to look at content strategy, you might find yourself working in all these areas, but in the long run they should be delegated to the appropriate specialists, who can follow the content strategy plan. In brief, the first part of content strategy is to assess business need, express it in terms of business outcomes and write a business case. It is part of the job to get a decent budget for the work! When you have a clear idea of what the business wants to achieve, the next question is: where are we now? You will need to audit current content, who is producing it and why. Assess the roles of everyone in the content lifecycle – not just writers and editors, but also those who commission, create and manage content, as well as those who upload and archive it. Then look at the processes that enable this. Benchmark against standards to see if the current system is ‘good enough’ for purpose, and define the scope of your audit appropriately. The audit is not a deliverable, though vital business information may emerge; it is to help you see priorities, perhaps through gap analysis. Then create a requirements matrix, which helps clarify what is top priority and what is not.
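A requirements matrix can be as simple as a scored list from which priorities fall out of the data. The Python sketch below is purely illustrative: the requirement names and the value-per-effort scoring rule are hypothetical examples, not taken from the talk.

```python
# Hypothetical sketch of a content-strategy requirements matrix:
# each requirement carries a business-value and an effort score,
# and a simple ratio surfaces the priorities.

requirements = [
    {"name": "Single sign-off workflow", "business_value": 5, "effort": 2},
    {"name": "Archive stale pages",      "business_value": 3, "effort": 1},
    {"name": "Rewrite style guide",      "business_value": 2, "effort": 3},
]

# Prioritise by highest value per unit of effort.
ranked = sorted(
    requirements,
    key=lambda r: r["business_value"] / r["effort"],
    reverse=True,
)

for req in ranked:
    print(f'{req["name"]}: score {req["business_value"] / req["effort"]:.1f}')
```

However the scores are derived, the point Kate made stands: the matrix is a communication device that makes priorities explicit and defensible when the roadmap is negotiated.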

From this, produce a roadmap for change, and at each step of the way keep the business on side. A document signed off by a Steering Committee is valuable to ensure the priorities are acknowledged by all!

The discussion that followed considered the work in relation to staff concerns – for example, people might be scared at the thought of change, or worried about their jobs. It was great to have such experienced speakers to meet the concerns that were raised. The meeting ended with Kate demonstrating some of the positive outcomes that could be achieved for organisations. There is huge potential for saving money and improving public-facing content.

This is taken from a report by Conrad Taylor.  See the full report on Conrad Taylor’s website: Content Strategy

Blog for May 2019 – Information Literacy

Information Literacy

Account by Conrad Taylor of a NetIKX meeting with Stéphane Goldstein and Geoff Walton of the CILIP Information Literacy Group — 30 May 2019.

I must preface this account of the May 2019 NetIKX seminar with a confession: I really hate the term ‘information literacy’ and can hardly bear typing the phrase without quote marks around it! I shall explain at the end of this report why I think the term is wrong and has negative connotations and consequences. In conversation with our speakers, I found that they largely agree with me – but in English, the term has stuck. I promise I’ll hold off the quote marks until I get to my postscript…

Stéphane Goldstein and Geoff Walton
Our two ILG speakers, Stéphane Goldstein (left) and Geoff Walton (right).

Stéphane Goldstein: How CILIP redefined Information Literacy

We had two excellent speakers, Stéphane Goldstein and Geoff Walton. Stéphane led by establishing the background and explaining CILIP’s revised definition of Information Literacy, and Geoff reported a couple of practical research projects with young people.

Stéphane is an independent research consultant, with a strong interest in information and digital literacy. He acknowledged that those are contested terms, and he’d address that. With Geoff he’s been actively involved in CILIP’s Information Literacy Group (henceforth, ‘ILG’), one of 20 or so CILIP Special Interest Groups. His role in ILG is as its advocacy and outreach officer.

First he would tell us how ILG has developed its approach, and produced a new definition of Information Literacy (IL) now backed by CILIP as a whole. Secondly, he would tell us about recent political developments promoting IL and associated ‘literacies’, as important societal and public policy issues.
CILIP definition document

The CILIP definition of Information Literacy, as published in 2018.

About 15 years ago, CILIP developed its first definition of Information Literacy. This focused on knowing when and why you need information, where to find it, how to evaluate it and use it, and how to communicate it in an ethical way. At that time, the definition of IL (a term which had originated in the USA) was strongly articulated around academic skills and the work of academic librarians, even though the ideas had potential relevance to many other contexts. Information literacy at this time was defined by CILIP in terms of skill-sets, such as in searching, interpretation, and information-handling.

That older definition still has quite a lot of traction, but CILIP found it necessary to move on. For one thing, it’s now thought relevant to look at IL in other workplace environments.

In 2016, ILG began redefining IL. There was a lengthy consultative process, in several stages, and a draft definition was presented at the Librarians’ Information Literacy Annual Conference, LILAC, in Nottingham in 2017. The draft was formally endorsed by CILIP at the end of 2017, and the new definition officially launched at LILAC 2018.
Stéphane had a few printed copies of the document with him, but it is readily available in PDF form on the CILIP Web site.

Defining IL

So what is different about the new definition? It is more complex than the old one, because of the need to apply the concepts to more contexts and situations. To keep it manageable, it is split into four parts. The headline definition – ‘Information literacy is the ability to think critically and make balanced judgements about any information we find and use’ – is designed to be ‘bite-sized’, and then gets fleshed out and exemplified.

The second part of the document sets things in a broader context. It talks about how IL relates to other ‘literacies’ such as ‘digital literacy’ and ‘media literacy’; and points out that IL is not just a bag of skills, but also concerns how those skills are applied. It describes the competencies, the knowledge, the confidence in applying IL in different contexts. The core of the definition is then related to five principal contexts for IL.

Conrad asked if definitions of all these derived ‘literacies’ are predicated on some foundational definition of what ‘literacy’ means, and Stéphane said — No. Indeed over the years, one of the problems that IL practitioners have had in getting the idea across is that the use of the term ‘literacy’ tends to throw up blockages to communication and comprehension. ‘Basic literacy’ of course is widely understood to mean the ability to read and to write, and perhaps to perform basic arithmetic tasks. Stéphane has heard people say that to use the ‘literacy’ word in relation to information-handling might be experienced as pejorative or counter-productive, in effect labelling some people as ‘illiterate’.
In some other languages, the concepts behind IL are labelled without reference to literacy – in French, it is ‘information mastery’ (la maîtrise de l’information). In Germany, they speak of Informationskompetenz (‘information competence’). In the English-speaking world, IL is the term we are stuck with for historical reasons – it’s how the concept was labelled in the USA when it emerged in 1974.

Contexts of IL application

The new IL definition refers to five sorts of lifelong situations:

Everyday life: CILIP says that IL has relevance to all of us, in many situations, throughout our lives. For example, it could be about knowing how to use the Internet for day-to-day transactions, such as online banking or shopping. It’s often assumed that people know how to do these things, but Stéphane reminded us that perhaps 20–25% of people lack confidence when faced with dealing with information online. Nor is there adequate access to training in these skills, either in schools or for adults.

Citizenship: These days we are beset with information of poor or dubious quality; misinformation and disinformation affect how we step up to our political responsibilities. There are IL skills involved in finding our way through the mazes of argument and alleged evidence in such matters as Brexit, climate change and so on. Judiciously picking through evidence and assertions are vital to the future of society – democratic societies in particular.

Education: This is the context where the original IL definition was at its strongest. Now we recognise it is important not just in Higher Education, but at all stages of the education cycle. ILG is concerned that school education does not teach these competencies adequately, but haphazardly, unless you are in the minority studying for the International Baccalaureate, or doing EPQs (Extended Project Qualifications, as Geoff would later explain).

If you lack prior experience in validating information, and bump into these issues for the first time at age 18 when you go to University (in the UK, about 40% of young people) — well, that’s rather too late, Stéphane thinks. There are also contexts for IL in lifelong education.

Workplace: In work settings, a lot of information needs to be dealt with, but it’s different from academic information. A lot of workplace information is vested in knowledge that colleagues have – also associates, even competitors. Working in teams presupposes an ability to exchange information effectively. Stéphane asked, how does IL skill contribute to a person’s employability?

Health: ‘Health literacy’ is increasingly important. With the NHS under pressure, people are expected to self-diagnose; but how can you find and evaluate credible information sources?
The CILIP team focused on these five contexts as examples to keep the list manageable, but of course there are other contexts too.

The role of information professionals

The fourth and final part of the CILIP statement looks at the role ‘information professionals’ may have in helping to promote and teach and help citizens develop an understanding of IL. (In my postscript below I note that librarians tend to have a limited and one-sided notion of just who counts as an ‘information professional’.)

There have been savage cutbacks in the public library and school library sectors; and these environments are being deprofessionalised. What guiding role can be played by remaining qualified librarians, by library assistants, and by library volunteers? How can non-qualified library workers be helped to develop their appreciation of IL, to help them play a role as advocates? A definition framed around broader concepts might help school and public librarians in this task.

Stéphane thinks this redefinition is well timed, given contemporary concerns about the role of disinformation in public life. Fundamental democratic principles, which need a level of trust between citizens, politicians and experts, are being undermined by discourses framed around flaky information. IL is one of the tools that can be of use here, though it is not the only tool.

In Stéphane’s view, the distinctions between information, digital, media literacy and so on are not that important. With digital literacy and media literacy in particular, there is a lot of overlap these days, as more media is delivered digitally. And we should admit that the term ‘information literacy’ has little currency in public discourse: it is used chiefly by librarians.
Recent illustrative developments in the policy sphere

A Parliamentary enquiry by the Digital Culture, Media and Sport Committee, looking into ‘Disinformation and “fake news”’, reported in February 2019 after 18 months of deliberation (read more and download it from here). Facebook in particular was subjected by the Committee to a great deal of criticism.

[Conrad notes — in contrast to the lambasting that Committee handed out to the social media platforms, very little was said about the disinformation and bias in the national British press and broadcasters…]

There is a chapter about ‘digital literacy’ in the Report, which says that ‘children, young adults and adults – all users of digital media – need to be equipped in general with sufficient digital literacy to be able to understand content on the Internet, and to work out what is accurate or trustworthy, and what is not.’

The Select Committee made a recommendation that ‘digital literacy should be the fourth pillar of education, alongside reading, writing and maths’. They called on the DCMS to co-ordinate with the Department for Education in highlighting proposals to include digital literacy as part of the physical, social, health and economic curriculum (PSHE). The Government, however, rejected this recommendation, claiming they are doing it already (they are not). CILIP went to talk to DfE in Spring 2019, and were told that there would be no review of the school curriculum before the next Parliament.

The Cairncross Review was commissioned to look at the future of the UK news industry, and reported at about the same time as the other DCMS committee. Amongst its observations were that online platforms’ handling of news should be placed under regulatory supervision, and it introduced the concept of ‘public interest news’. [Download the report]

That report uses the terms ‘media literacy’ and ‘critical literacy’ and echoes the Select Committee’s recommendations in calling for these skills to be promoted. It called on the Government to develop a ‘media literacy strategy’, to identify gaps in provision and to engage with all stakeholders. That recommendation has been adopted by government. This initiative came from the world of journalism, not the world of librarianship.

In April 2019 a White Paper on ‘Online Harms’ was published to initiate a consultation, which has now closed. The paper set out the government’s plans for a package of measures to keep UK users safe online, especially users defined as vulnerable.

The government uses a broad definition of what it means by ‘online harms’ — child sexual exploitation and abuse, terrorist content, content illegally uploaded from prisons, online sale of drugs, cyber-bullying, [encouraging] self-harm and suicide, under-age sharing of sexual imagery, and finally disinformation and manipulation. It also talks about online abuse of public figures.

Primarily the government’s White Paper aims to strengthen the regulatory environment, but it does have a nine-page sub-chapter on ways of empowering users. That section is mostly about education, and says, ‘Media and digital literacy can equip users with the skills they need to spot dangers online, to critically appraise information, and to take steps to keep themselves and others safe online. It also has wider benefits, including for the functioning of democracy, by giving users a better understanding of online content and enabling them to distinguish between facts and opinion.’

Like the Cairncross Review, the White Paper envisages the development of a National Media Literacy Strategy – which will probably take a while to evolve. It explicitly identifies librarians as partners in the development of that strategy – so perhaps CILIP’s approaches to government have not been in vain.

Stéphane expressed satisfaction that the White Paper recognises it as a serious problem when people are unable to distinguish between facts and opinions.
Measures on Health Literacy

At the end of 2017, NHS Health Education England and other bodies launched a Health Literacy Toolkit, ‘to help health staff tackle the challenges caused by low levels of health literacy, and improve health outcomes’. As the news release stated, ‘According to the Royal College of General Practitioners, health information is too complex for more than 60% of working age adults to understand, which means that they are unable to effectively understand and use health information.’ Interestingly, the toolkit aims to improve the ability of health professionals to explain health issues effectively. It was piloted in the East Midlands.

(Unfortunately, in a classic example of public sector ‘link rot’, all Health Education England URL references to the toolkit are dead-ends after just 20 months.)
IL considered in Europe

The European Commission also commissioned a report on fake news and disinformation, which offered important proposals in relation to media and information literacy. Some of the proposals are again in the educational realm, and it says ‘Media and information literacy has become an essential competence, as it is the starting point for developing critical thinking and good personal practice for discourse online, and also consequently in the offline world.’

The recommendations included better recognition of media and information literacy as key subjects for school curricula across Europe. The report also recommended that teacher training colleges incorporate media and information literacy as part of teachers’ own training and life-long learning.
Interventions in the UK (and Ireland)

Many organisations in the UK are showing an interest in IL, and the ILG has had dealings with many of them. Within DCMS there is the clumsily-named ‘Counter Online Manipulation of Security and Online Harms Directorate’ – a rather poorly resourced section, co-ordinating government policy on disinformation.

Then there is Ofcom, the communications regulator, part of its role being ‘to chart public perceptions and understanding of media literacy in the UK.’ Their understanding of media literacy is rather broad, and relates to the media environment.

The National Literacy Trust is a charity, chiefly interested in promoting basic literacy (reading and writing), but also has a set of resources on critical literacy, fake news and disinformation. You can read more and create a login to access free teaching resources, from here.

‘Newswise’ is a pilot project developed under the auspices of The Guardian newspaper, with Google funding. It helps to develop news literacy amongst primary school children. (See June 2018 Guardian article about Newswise.)

Ireland is interesting: the Media Literacy Ireland initiative is backed by the Broadcasting Authority of Ireland, and it has brought some 40 key players together, including the national broadcaster RTÉ, the Press Council of Ireland, Sky, Facebook and Google, and the Library Association of Ireland. Stéphane thinks that it helps that Ireland’s population is only five million; in a country of 65 million, it could be much more difficult.

Another challenge which Stéphane noted is, with many organisations showing an interest in these issues, how to ensure join-up and concerted activity? Internationally, UNESCO does some work in this area but it doesn’t have a lot of influence. The EU has shown an interest, but has no statutory powers in this area; its activity has been limited to research and reports.

Geoff Walton: improving information discernment

Geoff reported on research work funded by the Information Literacy Group and the British Academy. This work is interesting both for its cross-disciplinary approach and scientific components, and its trialling of practical educational interventions. The team included Jamie Barker at Loughborough University, Matt Pointon at Northumbria University, and Martin Turner and Andy Wilkinson from Staffordshire University. Geoff himself is a senior lecturer in the Department of Information and Communication at Manchester Metropolitan University.

Geoff reminded us again of the CILIP 2018 definition of Information Literacy as ‘the ability to think critically and make balanced judgements about any information we find and use’, which ‘empowers us as citizens to develop informed views and to engage fully with society.’ Geoff and colleagues focused on the bit about making informed judgements about information that was presented to the test subjects.

He also said that rather than using the term ‘balanced’, he’d prefer to say ‘well calibrated’. There seems to be a problem with the idea of balance — note how badly the BBC has misjudged the practical consequences of fetishising ‘balance’ between opposed views, notably on the Today programme, and Question Time.
The concept of information discernment

Some people are good at making well-calibrated judgements about information, and others do it poorly. The studies considered these as gradations in information discernment. The differences affect people’s responses when exposed to mis-information – emotionally, cognitively and even physiologically.

Many maps showed the presumed location of the (mythical) island of ‘Hy-Brasil’ west of Ireland – though they often disagreed about just where it was!

The research explored this in a case study with young people 18–21 years of age. Further research with a cohort of young people aged 16–17 also found that there are ways in which we can help the ‘low discerners’ to become ‘high discerners’, with the right kind of training. Geoff would report on both.

Dis-information and mis-information are nothing new. Geoff referred to a persistent myth of the existence of a fog-shrouded island west of Ireland known as ‘Hy-Brasil’ (possibly from Irish Uí Breasail). Belief in this inspired several expeditions, and the island was marked on maps from 1325 onwards, even as late as 1865. Geoff also referred to the contemporary nutritionist Gillian McKeith, author of ‘You Are What You Eat’, and her string of bogus qualifications (see Wikipedia: the tale of how scientist Ben Goldacre obtained one of the same diplomas for his dead cat will amuse you).

Sometimes the issue is about how information is presented. Geoff referred to a BBC News report in the 1960s about a pay dispute at Monkwearmouth Pit near Sunderland, where his dad was a miner. The BBC claimed miners were taking home £20 a week. Geoff’s dad was furious at this deception: he was taking home just £15 a week. What the BBC had done was to take an average from the top to the bottom of the pay scale.

Characteristics of high information discerners — people who tend to exhibit high levels of information discernment also display these characteristics:

they are curious about the world
they are sceptical about information presented to them by search engines
they will consult more than one information source [deliberately doing so is sometimes called ‘triangulation’, with reference to the practices of land surveying]
they recognise that it is important to check the credentials of the author or other source
if they encounter contradictory information, they tend to consider it also, rather than ignore it
they display higher levels of attention to information, as shown in the eye-tracking elements of the research

In contrast, people who exhibit low levels of information discernment are significantly less likely to be aware of these issues, and are generally inattentive to the detail of information they encounter.
It’s in the blood

Geoff’s perceptions around these issues were widened by a chance conversation with some Manchester colleagues in the field of the psychology of sport and exercise. These researchers are interested in how people deal with stress, and they operate a model of challenge and threat. This then reminded Geoff of an information behaviour model proposed by Tom Wilson in the Journal of Documentation in 1999 (‘Models in information behaviour research’), in which he identifies a role for the psychological stress/coping mechanism. Out of this encounter, an interdisciplinary twist to the research was developed.

In a stressful situation, some people regard the stressor as a challenge and respond to it adaptively, but if you experience it as a threat, your response is likely to be maladaptive and unhelpful to yourself. As Geoff put it, standing before the NetIKX audience, he felt inspired to rise to the challenge, but if his response were maladaptive his throat might tighten, he might panic and fluff his lines.

Physiologically, any positive response to stress goes along with a measurable dilation (widening) of the blood vessels, especially the arteries and major veins, increasing blood flow, including of course to the brain. But in the case of maladaptive responses, the experience of threat results in the opposite, vasoconstriction, which restricts blood flow.

The research team therefore investigated whether there was a link between their ‘information discernment’ model, and measurable physiological reactions to misinformation.
The lab space

The research team set up a laboratory space with various instruments. One side was equipped with a ‘Finometer® PRO’, a non-invasive monitoring device which puts a small pressure cuff around the test subject’s finger and uses transmitted light to examine blood flow on a continuous beat-to-beat basis, reporting on blood pressure, pulse rate etc. The other side of the lab featured an eye-tracking system, which Geoff didn’t describe in detail, but he later showed us the kind of ‘attention heat map’ display it produces.

The team got their subjects to do various tasks. One task was a word-search which was actually impossible to complete satisfactorily (this reminds me of the famous Star Trek Kobayashi Maru no-win scenario, a test of character). Obviously, not having a solution generates some mild stress. They also told the test subjects that they were helping another student, referred to as ‘the confederate’, to win £100 (this was a fib). Some participants were additionally told that the ‘confederate’ they were thus helping was a person of extreme religious views (again, not true, but a way of winding up the stress levels).

With the test completed, the test subjects were then taken through a PANAS self-reporting mood-assessing questionnaire (Positive And Negative Affect Schedule), and then the subjects were fully and honestly debriefed.

Results: There does seem to be a relationship between the level of information discernment and how subjects reacted to stress. Those whom the team classified as exhibiting high information discernment tended to react to stress as a ‘challenge’, rather than treating it like a threat to their well-being. They exhibited more efficient blood flow, and a healthier, better-adapted heart response. Also – this came as something of a surprise to the team – high information discernment individuals responded with more positive emotions in the PANAS assessment.
The information search task

Two of the heat maps generated by the eye-tracking equipment, showing how much attention different participants gave to different parts of the information display. The map on the left was measured from someone the team considered to exhibit the characteristics of a ‘high information discerner’; the one on the right, from a ‘low information discerner’.

Another task for the test subjects was to look at a page of information, made from text and data graphics sourced from The Guardian, headed ‘Religious extremism main cause of terrorism, according to report’ (it’s a rearrangement of material from a November 2014 article).

Geoff displayed a couple of ‘heat maps’ that were generated from the eye tracking monitor system, showing the extent to which different parts of the ‘page’ received the most attention from different participants: see image. The left example belonged to a ‘high discerner’, and the other to a ‘low discerner’. Admittedly, the two examples Geoff showed us were at the extremes of those observed.

Delving deeper, Geoff said that the ‘high discerner’s’ saccadic movements (this means, the sequential leaps between eye-fixation points) were measurably more ordered; this person also extensively examined the graphical components, which were completely ignored by the low discerner.

Conrad commented that there are particular skills involved in interpreting data graphics, which could make it a complicating factor in this study. It may be that the individual whose heat-map was displayed on the right does not have those skills, and therefore ignored the data graphics. Of course, lack of that skill might correlate with ‘low information discernment capability’ generally, and it doesn’t seem that unlikely, but it’s not a foregone conclusion.

From this part of the study, the research team concluded:

People with low information discernment capabilities are more likely to experience negative physical and emotional reactions [as tested by the Finometer and the PANAS review] when looking for information. This is exacerbated if the information they are presented with is contradictory.
‘Low discerners’ are less likely to attend to detail, and are unlikely to check the sources of information, or the credentials of the writer or other person telling them the information.

Can we help develop information discernment?

Is it possible to move low information discerners to become high information discerners? The good news is that the team found they could, and they offer four rules of thumb:

In such training, you need to match a person’s context as closely as you can, such as what people are typically doing and trying to find out. In other words, ‘generic’ forms of training won’t work.
The training needs to be informal and conversational.
The training should be collaborative.
It should be practical, not theoretical.

Geoff passed around some copies of materials which they prepared for the training workshop with a group of school students aged 16–17. He noted that similar results have also been recorded with first-year university students.

In this case, the subjects were given two pieces of information about smoking. One was a company Web site, the other was about research from a university. Trainees noticed things of which they were previously oblivious, such as citations and sources, and they developed an appreciation of the need to consider whether information sources are reliable.

This part of the work, funded by the British Academy, was aimed at helping a particular school deliver their Extended Project Qualification. The EPQ is an optional qualification within the National Qualifications Framework in England and Wales, taken after A-Levels, and involves producing a dissertation of about 5,000 words or some similar artefact, plus documentation showing research and thinking processes. (It is similar to the Scottish Sixth Year Studies qualification which was available from 1968 to 2000.) The EPQ assignment is probably the first time that school students have to do something where the information has not been handed to them on a plate – they have to find it out for themselves, and it is a good introduction to what they’ll meet at university.

Teachers involved in this workshop project noted a real shift in information behaviours as a result – no longer was there a passive acceptance of information, and students started to question the credibility of sources. The only problem, said Geoff, is that they can go too far and question everything, and you have to lead them back to an understanding that there are certain facts out there!

The project conclusion is that by training low information discerners, you can help them construct their own cognitive firewalls – which is better than relying on ‘machines of loving grace’ to protect us with filters and AI algorithms. We may hope that by encouraging people to have the cognitive habits and capabilities of high information discerners, they will be less susceptible to the unsubstantiated claims which permeate the Internet, and especially social media.

Stéphane and Geoff would love to work with other groups and constituencies to help mainstream Information Literacy, and would like to hear ideas for joint events and case studies.

Discussion summaries

After a break we assembled into table groups, and each table discussed a different aspect of the issue. Lissi Corfield then called on the groups to report back to a brief plenary session, and I report some of the points below:

On trusting information sources: one participant remarked that her window-cleaner had said that he got all his knowledge of what was going on in the world from Facebook. Initially that thought might inspire horror; but on the other hand, we’ve always trusted people we know as intermediates; is this so different?
IL in the workplace: the table considering this topic agreed that the tools and methods described by Geoff (and used in schools) could also find application in the workplace. But it would be lovely not to have to teach newcomers coming in from school and university about IL – they should have learned it already!
Logic and education: Another table, from which Dion Lindsay reported, thought that Logic, currently taught in the first year of university Philosophy courses, should have a place in secondary school education, as a foundation for thought. (I commented, ‘Bring back the mediaeval curriculum!’, in which Grammar, Logic and Rhetoric were the three foundational liberal arts subjects – the trivium.)
Self-awareness: a fourth table group commented that there is value in helping people to be more aware of, and critical of, their own thought processes.
Awareness of how search engines work – for example, why Google searches result in a particular ranking order of results – should help to foster a more critical approach to this information gateway.

Postscript: my discomforts with the ‘IL’ term
Origins of the ‘information literacy’ term
The Zurkowski document

Paul G. Zurkowski’s 1974 paper — where the term ‘information literacy’ was coined.

It was from Geoff that I learned that the term ‘information literacy’ was coined by Paul G. Zurkowski in 1974, in a report for the [U.S.] National Commission on Libraries and Information Science.

Indeed, Geoff was so kind as to email me a copy of that paper, as a PDF of a scanned typescript — a section of the cover page is shown right. In the USA, the term seems to have been picked up by people involved in Library and Information Studies (LIS) and in educational technology.

I personally first encountered the IL term in January 2003, at a workshop hosted by the British Computer Society’s Developing Countries Specialist Group, in the run-up to the UN World Summit on the Information Society. The WSIS secretariat had been pushing the idea of ‘Information Literacy’. John Lindsay of Kingston University chaired the meeting.

A temple-like model of ‘Seven Pillars of Information Literacy’ had been dreamed up four years earlier by the Society of College, National and University Libraries (SCONUL), and two LIS lecturers from London Metropolitan University were promoting it. A full account of that workshop is available as PDF.
SCONUL model of IL: seven pillars

SCONUL’s ‘Seven Pillars of Information Literacy’, redrawn by Conrad Taylor.

I took an immediate dislike to the term. Everyone agrees that true literacy, the ability to read and to write, is a fundamental skill and a gateway to learning; and this smelt to me like a sly attempt to imply that librarians, with their information retrieval and analysis skills, were as valuable to society as teachers.

In September 2003 I ran a conference called Explanatory and Instructional Graphics and Visual Information Literacy. My preparatory paper ‘New kinds of literacy, and the world of visual information’ is available from this link.

Also, proper literacy requires skills in consumption (reading) and production (writing), but in my experience librarians consistently don’t say much to recognise the skills used by those making information products: writers and illustrators, graphic designers, documentary film producers, encyclopaedists and so on.

There is a term ‘visual literacy’ as featured since 1989 in the journal of the same name and the International Visual Literacy Association, but I am prepared to cut that use of the L-word more slack because IVLA and the journal are concerned about design as well as consumption.

As for the implication that Stéphane referred to, that someone having difficulty in finding and making critical judgements about information is in effect being shamed as ‘illiterate’, I confess that objection hadn’t occurred to me.
Critique the object/system, not the hapless user?

In the field of Information and Communication Design, where I have been a practitioner for 30+ years, our professional approach to failures of communication or information discovery is not to default to blaming the reader/viewer/user, but to suspect that the information artefact or system might not have been designed very well. It’s just not on to react to such problems by claiming the public is ‘information illiterate’, when the problem is your illiterate inability to communicate clearly.

Some weeks later, while at the ISKO UK conference, it also occurred to me that people who work to improve the way information sources are organised, and work on improving how they are documented and accessed, share this attitude with Information Designers. They also do not accuse people of ‘illiteracy’ – they address the lack of coherence in the information. Is there something about librarian culture that resists deconstructing and critiquing information products and systems, and leaves them inviolate on a pedestal?

Anyway, I’ll draw my small tirade to a close! I do agree that there is A Problem, and I largely agree with Stéphane and Geoff about the shape of that problem. I shall continue to place quote marks around ‘information literacy’ and contest the use of the term. I like the German alternative that Stéphane mentioned, Informationskompetenz. I think I shall refer to ‘information handling competences’ in the plural, as my Preferred Term.

— Conrad Taylor, July 2019

Blog for March 2019 Seminar: Open Data

The speaker at the NetIKX seminar in March 2019 was David Penfold, a veteran of the world of electronic publishing who also participates in ISO committees on standards for graphics technology.  He has been a lecturer at the University of the Arts London and currently teaches Information Management in a publishing context.

David’s talk looked at the two aspects of Open Data.  The most important thing for us to recognise is Data as the foundation and validation of Information.  He gave a series of interesting historical examples and pointed out that closer to the present day, quantum theory, relativity and much besides all developed because the data that people were measuring did not fit the predictions that earlier theoretical frameworks suggested.  A principle of experimental science is that if the data from your experiments don’t fit the predictions of your theories, it is the theories which must be revisited and reformulated.

David talked about some classificatory approaches. He mentioned the idea of a triple, where you have an entity, plus a property, plus a value. This three-element method of defining things is essential to the implementation of Linked Data. Unless you can establish relationships between data elements, they remain meaningless, just bare words or numbers. A number of methods have been used to associate data elements with each other and with meaning. The Relational Database model is one; spreadsheets are based on another; and the Standard Generalised Markup Language (and subsequently XML) was an approach to giving structure to textual materials. Finally, the Semantic Web and the Resource Description Framework have developed over the last two decades.
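The entity–property–value idea can be sketched in a few lines of Python. This is a minimal, illustrative triple store (the subjects and values below are invented for the example), with the kind of pattern-matching query that a real Linked Data system performs at far greater scale:

```python
# A minimal, illustrative 'triple store': each statement is an
# (entity, property, value) tuple. All data here is invented for the example.
triples = [
    ("London", "isCapitalOf", "United Kingdom"),
    ("London", "population", 8900000),
    ("United Kingdom", "memberOf", "United Nations"),
]

def match(subject=None, prop=None, value=None):
    """Return every triple matching the pattern; None is a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (prop is None or t[1] == prop)
            and (value is None or t[2] == value)]

# Everything we know about London:
print(match(subject="London"))
```

It is the relationship in the middle of each triple that turns bare words and numbers into meaningful data, which is exactly the point David was making.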

David then moved on to what it means for data to be Open. There are various misconceptions around this – it does not mean Open Access, a term used within the worlds of librarianship and publishing to mean free-of-charge access, mainly to academic journals and books. Nor are we talking about Open Archiving, which has a close relationship to the Open Access concept; much of the effort in Open Archiving goes into developing standardised metadata so that archives can be shared. Open data is freely available. It often comes from government, but could be from other bodies and networks, and even private companies.

We then watched a short video clip, from 2012, of Sir Nigel Shadbolt, a founder of the Open Data Institute, which set up the open data portals for the UK government. He explains how government publication of open data, in the interests of transparency, is now found in many countries, at national, regional and local level. The benefits include improved accountability, better public services, improvement in public participation, improved efficiency, creation of social value and innovation value to companies.

We heard about examples of Open Data: for instance, Network Rail publishes open data and benefits through improvements in customer satisfaction. It says that its open data generates technology-related jobs around the rail sector and saves costs in information provision when third parties invest in building information apps based on that data. The data is used by commercial users, but also by the rail industry and Network Rail itself. It can also be accessed by individuals and academia.

Ordnance Survey open data is important within the economy and in governance. David uses one application in his role as Chair of the Parish Council in his local village: the data allows them to see Historic England data for their area, and Environment Agency information showing sites of special scientific interest and areas of outstanding natural beauty.

After the tea-break, David showed three clips from a video of a presentation by Tim Berners-Lee. David then explained how the Semantic Web works. It is based on four concepts: a) metadata; b) structural relationships; c) tagging; and d) the Resource Description Framework method of coding, which in turn is based on XML.
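A flavour of how RDF rides on top of XML: the fragment below is a minimal, hypothetical RDF/XML document (the example.org names are placeholders, not real vocabularies), and a few lines of standard-library Python pull out the entity–property–value statements it encodes:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RDF/XML fragment: one resource with two properties.
# The example.org URIs are placeholders, not real vocabularies.
rdf_xml = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/terms/">
  <rdf:Description rdf:about="http://example.org/seminar">
    <ex:title>Open Data</ex:title>
    <ex:speaker>David Penfold</ex:speaker>
  </rdf:Description>
</rdf:RDF>"""

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
root = ET.fromstring(rdf_xml)
for desc in root.findall(f"{RDF}Description"):
    subject = desc.get(f"{RDF}about")
    for prop in desc:
        # Each child element encodes one triple: subject, property, value.
        print(subject, prop.tag, prop.text)
```

The same structural idea – a described resource, tagged properties, values – underlies the metadata, structure and tagging concepts David listed.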

The Open Data Institute has developed an ‘ethics canvas’, which we looked at to decide what we thought about it.  It gives a list of fifteen issues which may be of ethical concern.  We discussed this in our table groups and this was followed by a general discussion.  There were plenty of examples raised from our collective experience, which made for a lively end to the seminar.

This is taken from a report by Conrad Taylor

To see the full report follow this link: Conradiator : NetIKX meeting report : Open Data

Blog for January 2019: Wikipedia & knowledge sharing

In January 2019, NetIKX held a seminar on the topic of Wikipedia and other knowledge-sharing experiences. Andy Mabbett gave a talk about one of the largest global projects in knowledge gathering in the public sphere: Wikipedia and its sister projects. Andy is an experienced editor of Wikipedia, with more than a million edits to his name. He worked in website management and always kept his eyes open for new developments on the Web. When he heard about the Wikipedia project, founded in 2001, he searched there for information about his local nature reserves – he is a keen bird-watcher. There was nothing to be found, and this inspired him to add his first few entries. He has been a volunteer since 2003 and makes a modest living with part of his income stream coming from training and helping others to become Wikipedia contributors too. The volunteers are expected to write publicly accessible material, not create new information. The sources can be as diverse and scattered as necessary, but Wikipedia pulls that information together coherently and gives links back to the sources.

The Wikimedia Foundation, which hosts Wikipedia, says: ‘Imagine a world in which every single human being can freely share in the sum of all knowledge. That is our commitment.’

Wikipedia is the free encyclopaedia that anybody can edit.  It is built by a community of volunteers contributing bit by bit over time.  The content is freely licensed for anybody to re-use, under a ‘creative commons attribution share-alike’ licence.  You can take Wikipedia content and use it on your own website, even in commercial publications and all you have to do in return is to say where you got it from.  The copyright in the content remains the intellectual property of the people who have written it.

The Wikimedia Foundation is the organisation which hosts Wikipedia.  They keep the servers and the software running.  The Foundation does not manage the content.  It occasionally gets involved over legal issues for example, child protection but otherwise they don’t set editorial policy or get involved in editorial conflicts.  That is the domain of the community.

Guidelines and principles

Wikipedia operates according to a number of principles called the ‘five pillars’.

  • It is an encyclopaedia which means that there are things that it isn’t: it’s not a soap box, nor a random collection of trivia, nor a directory.
  • It’s written from a neutral point of view, striving to reflect what the rest of the world says about something.
  • As explained, everything is published under a Creative Commons open license.
  • There is a strong ethic that contributors should treat each other with respect and civility. That is the aim, although Wikipedia isn’t a welcoming space for female contributors and women’s issues are not as well addressed as they should be.  There are collective efforts to tackle the imbalance.
  • Lastly there is a rule that there are no firm rules! Whatever rule or norm there is on Wikipedia, you can break it if there is a good reason to do so.  This does give rise to some interesting discussions about how much weight should be given to precedent and established practice or whether people should be allowed to go ahead and do new and innovative things.

In Wikipedia, all contributors are theoretically equal and hold each other to account. There is no editorial board, and there are no senior editors who carry a right of overrule or veto. ‘That doesn’t quite work in theory,’ says Andy, ‘but like the flight of the bumblebee, it works in practice.’ For example, in September 2018, newspapers ran a story that the Tate Gallery had decided to stop writing biographies of artists for their website; they would use copies of Wikipedia articles instead. The BBC does the same, with biographies of musicians and bands on their website and also with articles about species of animals. These institutions are confident because Wikipedians are recognised as good at fact-checking: if errors are spotted, or assertions are made without a supporting reliable reference, they get flagged up. But there are some unintended consequences too. Because dedicated Wikipedians have the habit of checking articles for errors and deficits, Wikipedia can be a very unfriendly place for new and inexperienced editors. A new article can get critical ‘flags’ to show something needs further attention. People can get quite zealous about fighting conflicts of interest, bias or pseudo-science.

For most people there is just one Wikipedia.  But there are nearly 300 Wikipedias in different languages.  Several have over a million articles, some only a few thousand. Some are written in a language threatened with extinction and they constitute the only place where a community of people is creating a website in that language, to help preserve it as much as to preserve the knowledge.

Wikipedia also has a number of ‘sister projects’.  These include:

  • Wiktionary is a multi-lingual dictionary and thesaurus.
  • Wikivoyage is a travel guide.
  • Wikiversity has a number of learning modules, so you can teach yourself something.
  • Wikiquote is a compendium of notable and humorous quotations.

Probably the most important of the sister projects, in terms of the impact it is having and its rate of expansion, is Wikidata. Many Wikipedia articles have an ‘infobox’ on the right side. These information boxes are machine-readable, as they have a microformat mark-up behind the scenes. From this came the idea of gathering all this information centrally. This makes it easier to share across different language versions of Wikipedia, and it means all the Wikipedias can be updated together – for example, if someone well known dies. Under its open licence, the data can be used by any other project in the world. Using the Wikidata identifiers for millions of things can help your system become more interoperable with others. As a result, there is a huge asset of data, including data taken from other bodies (for example, English Heritage or chemistry databases).
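Andy’s point about identifiers can be made concrete with a sketch. The two datasets below are invented, but the Wikidata identifiers are real (Q42 is Douglas Adams, Q692 is William Shakespeare); because both datasets use the same identifiers, joining them needs no error-prone matching on names:

```python
# Two invented datasets keyed by real Wikidata identifiers (Q42 is Douglas
# Adams; Q692 is William Shakespeare). Because both datasets share the same
# identifiers, joining them needs no fuzzy matching on spellings of names.
library_catalogue = {"Q42": "Douglas Adams", "Q692": "William Shakespeare"}
birth_years = {"Q42": 1952, "Q692": 1564}

merged = {qid: (library_catalogue[qid], birth_years.get(qid))
          for qid in library_catalogue}
print(merged["Q42"])  # ('Douglas Adams', 1952)
```

This is interoperability in miniature: any system that adopts the shared identifiers can merge its data with any other.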

Andy told us about many more such projects, and the information was a revelation to most of us. We were then delighted to spend some time on an exercise in small groups. This featured two speakers who talked about the way they had used a shared Content Management System to gather and share knowledge. These extra speakers circulated round the groups to help the discussions. The format was different from NetIKX’s usual breakout groups, but feedback from participants was very positive.

This blog is based on a report by Conrad Taylor.

To see the full report you can follow this link: Conradiator : NetIKX meeting report : Wikipedia & knowledge sharing


Blog for the November 2018 seminar: Networks

The rise of on-line social network platforms such as Facebook has made the general population more network-aware. Yet, at the same time, this obscures the many other ways in which network concepts and analysis can be of use. Network Science was billed as the topic for the November 2018 NetIKX seminar, and in hopes that we would explore the topic widely, I did some preliminary reading.

I find that Network Science is perhaps not so much a discipline in its own right, as an approach with application in many fields – analysis of natural and engineered geography, transport and communication, trade and manufacture, even dynamic systems in chemistry and biology. In essence, the approach models ‘distinct elements or actors represented by nodes (or vertices) and the connections between [them] as links (or edges)’ (Wikipedia), and has strong links to a branch of mathematics called Graph Theory, building on work by Euler in the 18th century.
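The nodes-and-links model is easy to make concrete. In the sketch below (an invented five-node network, Python standard library only), the graph is stored as an adjacency list, and breadth-first search counts the link-steps between two nodes – the ‘path length’ that recurs throughout network science:

```python
from collections import deque

# An invented five-node network as an adjacency list:
# each node maps to the list of nodes it links to.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def path_length(graph, start, end):
    """Breadth-first search: the number of link-steps between two nodes."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == end:
            return dist
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # no route between the two nodes

print(path_length(graph, "A", "E"))  # A -> B (or C) -> D -> E: 3 steps
```

Graph Theory gives this simple structure its analytical power: the same representation serves river systems, road grids and social ties alike.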

In 2005, the US National Academy of Sciences was commissioned by the US Army to prepare a general report on the status of Network Science and its possible application to future war-fighting and security preparedness: the promise was that, if the approach looked valuable, the Army would put money into getting universities to study the field. The NAS report is publicly available and is worth a read. It groups the fields of application broadly into three: (a) geophysical and biological networks (e.g. river systems, food webs); (b) engineered networks (roads, electricity grid, the Internet); and (c) social networks and institutions.

I’ve prepared a one-page summary, ‘Network Science: some instances of networks and fields of complex dynamic interaction’, which also lists some further study resources, five books and an online movie. (Contact NetIKX if you want to see this). In that I also note: ‘We cannot consider the various types of network… to be independent of each other. Amazon relies on people ordering via the Internet, which relies on a telecomms network, and electronic financial transaction processing, all of which relies on the provision of electricity; their transport and delivery of goods relies on logistics services, therefore roads, marine cargo networks, ports, etc.’

The NetIKX seminar fell neatly into two halves. The first speaker, Professor Yasmin Merali of Hull University Business School, offered us a high-level theoretical view and the applications she laid emphasis on were those critical to business success and adaptation, and cybersecurity. Drew Mackie then provided a tighter focus on how social network research and ‘mapping’ can help to mobilise local community resources for social welfare provision.

Drew’s contribution was in some measure a reprise of the seminar he gave with David Wilcox in July 2016. Another NetIKX seminar which examined the related topics of graph databases and linked data graphs is that given by Dion Lindsay and Dave Clarke in January 2018.

Yasmin Merali noted that five years ago there wasn’t much talk about systems, but now it is commonplace for problems to be identified as ‘systemic’. Yet, ironically, Systems Thinking used to be very hot in the 1990s, later displaced by a fascination with computing technologies. Now once again we realise that we live in a very complex and increasingly unpredictable world of interactions at many levels; where the macro level has properties and behaviours that emerge from what happens at the micro level, without being consciously planned for or even anticipated. We need new analytical frameworks.

Our world is a Complex Adaptive System (CAS). It’s complex because of its many interconnected components, which influence and constrain and feed back upon each other. It is not deterministic like a machine, but more like a biological or ecological system. Complex Adaptive Systems are both stable (persistent) and malleable, with an ability to transform themselves in response to environmental pressures and stimuli – that is the ‘adaptive’ bit.

We have become highly attuned to the idea of networks through exposure to social media; the ideas of ‘gatekeepers’, popularity and influence in such a network are quite easy to understand. But this is selling short the potential of network analysis.

In successful, resilient systems, you will find a lot of diversity: many kinds of entity exist and interact within them. The links between entities in such systems are equally diverse. Links may persist, but they are not there for ever, nor is their nature static. This means the network can be ‘re-wired’, which makes adaptation easier.

Amazing non-linear effects can emerge from network organisation, and you can exploit this in two ways. If adverse phenomena are encountered, the network can implement a corrective feedback response very quickly (for example, to isolate part of the network, which is the correct public health response in the case of an epidemic). Or, if that reaction isn’t going to have the desired effect, we can try to re-wire the network, dampening some feedback loops, reinforcing others, and thus strengthening those ‘constellations’ of links which can best rise to the situation.

Information flows in the network. Yasmin offered us an analogy: the road network system and, distinct from that, the traffic running across it. People writing about the power of social media have concentrated on the network structure (the nodes and the links), but not so much on the factors which enable or inhibit different kinds of dynamic within that structure.

Networks can enable efficient utilisation of distributed resources. We can also see networks as the locus where options are generated. Each change in a network brings about new conditions. But the generative capacity does come at a cost: you must allow sufficient diversity. Even if there are elements which don’t seem useful right now, there is a value in having redundant components: that’s how you get resilience.

You might extend network thinking outwards, beyond networking within one organisation, towards a number of organisations co-operating or competing with each other. Some of your potential partners can do better in the current system and with their resources than you; in another set of circumstances, it might be you who can do better. If we can co-operate, each tackling the risks we are best able to cope with, we can spread the overall risk and increase the capability pool.

Yasmin referred to the idea of ‘Six Degrees of Separation’ – that through intermediate connections, each of us is just six link-steps away from anybody else. The idea was important in the development of social network theory, but it turns out to have severe limitations, because where links are very tenuous, the degree of access or influence they imply can be illusory. That’s why simplistic social network graphs can be deceptive.

In a regular ‘small worlds’ network, everyone is connected to the same number of people in some organised way, and even one extra random link shortens the path length. It’s possible to ‘re-wire’ a network to get more of these small-world effects, with the benefit of making very quick transitions possible.

But there is another kind of network, similar in structure to the Internet and most of the biological systems we might consider – and that’s what we can call the ‘scale-free’ network. In this case, there is no cut-off limit to how large, or how well-connected a node can be.

Networks are also ‘lumpy’ – in large networks, there are very large hubs, but also adjacent less-prominent hubs, which in an Internet scenario are less likely to be attacked or degraded. This gives some hope that the system as a whole is less likely to be brought to its knees by a random attack; but a well-targeted attack against the larger hubs can indeed inflict a great deal of damage. This is something that concerns security-minded designers of networks for business. It is strategically imperative to have good intelligence about what is going on in a networked system – what are the entities, which of them are connected, and what is the nature of those connections and the information flows between them.
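The difference between random failure and a targeted attack on a hub can be shown with a toy network. In this sketch (an invented hub-and-spoke graph), removing a peripheral node leaves the rest connected, while removing the hub fragments the network:

```python
from collections import deque

# An invented hub-and-spoke network: 'hub' links to every other node.
network = {
    "hub": ["a", "b", "c", "d"],
    "a": ["hub", "b"],
    "b": ["hub", "a"],
    "c": ["hub"],
    "d": ["hub"],
}

def reachable_from(graph, start):
    """All nodes reachable from `start` by following links (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        for n in graph[queue.popleft()]:
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return seen

def remove_node(graph, victim):
    """A copy of the graph with one node, and all links to it, deleted."""
    return {k: [n for n in v if n != victim]
            for k, v in graph.items() if k != victim}

# Random failure of a peripheral node: the rest stays connected.
print(len(reachable_from(remove_node(network, "d"), "a")))    # 4 nodes
# Targeted attack on the hub: the network fragments.
print(len(reachable_from(remove_node(network, "hub"), "a")))  # 2 nodes
```

Real scale-free networks are far larger, but the asymmetry is the same: they tolerate random failures well and targeted attacks on hubs badly.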

It’s important to distinguish between resilience and robustness. Resilience often comes from having network resources in place which may be redundant, may appear to be superfluous or of marginal value, but they provide a broader option space and a better ability to adapt to changing circumstance.

Looking more specifically at social networks, Yasmin referred to the ‘birds of a feather flock together’ principle, where people are clustered and linked based on similar values, aspirations, interests, ways of thinking etc. Networks like this are often efficient and fast to react, and much networking in business operates along those lines. However, within such a network, you are unlikely to encounter new, possibly valuable alternative knowledge and ways of thinking.

Heterogeneous linkages may only propagate information weakly, but they are valuable for expanding the knowledge pool. Expanded linkages may operate along the ‘six degrees’ principle, through intermediate friends-of-friends, who serve both as transmitters and as filters. And yet a trend has been observed for social network engines (such as Facebook) to create a superdominance of ‘birds of a feather’ linkages, leading to confirmation bias and even polarisation.

In traditional ‘embodied’ social networks, people bonded and transacted with others whom they knew in relatively persistent ways, and could assess through an extended series of interactions in a broadly understandable context. In the modern cybersocial network, this is more difficult to re-create, because interactions occur through ‘shallow’ forms such as text and image – information is the main currency – and often between people who do not really know each other.

Another problem is the increased speed of information transfer, and decreased threshold of time for critical thought. Decent journalism has been one of the casualties. Yes, ‘citizen journalism’ via tweet or online video post can provide useful information – such informants can often go where the traditional correspondent could not – but verification becomes problematic, as does getting the broader picture, when competition between news channels to be first with the breaking story ‘trumps’ accuracy and broader context.

If we think of cybersocial networks as information networks, carrying information and meaning, things become interesting. Complexity comes not just from the arrangement of links and nodes, but also from the multiple versions of information, and whether a ‘message’ means the same to each person who receives it: there may be multiple frameworks of representation and understanding standing between you and the origin of the information.

This has ethical implications. Some people say that the Internet has pushed us into a new space. Yasmin argues that many of the issues are those we had before, only now more intensely. If we think about the ‘gig economy’, where labour value is extracted but workers have scant rights – or if we think about the ownership of data and the rights to use it, or surveillance culture – these issues have always been around. True, those problems are now being magnified, but maybe that cloud has a silver lining in forcing legislators to start thinking about how to control matters. Or is it the case that the new technologies of interaction have embedded themselves at such a fundamental level that we cannot shift them?

What worries Yasmin more are issues around Big Data. As we store increasingly large, increasingly granular data about people from sources such as fitbits, GPS trackers, Internet-of-Things devices, online searches… we may have more data, but are we better informed? Connectivity is said to be communication, but do we understand what is being said? The complexity of the data brings new challenges for ethics – often, you don’t know where it comes from, what was the quality of the instrumentation, and how to interpret the data sets.

And then there is artificial intelligence. The early dream was that AI would augment human capability, not displace it. In practice, it looks as if AI applications do have the potential to obliterate human agency. Historically, our frameworks for how to be in the world, how to understand it, were derived from our physical and social environment. Because our direct access to the physical world and the raw data derived from it is compromised, replaced by other people’s representation of other people’s possible worlds, we need to figure out whose ‘news’ we can trust.

When we act in response to the aggregated views of others, and messages filtered through the media, we can end up reinforcing those messages. Yasmin gave as an example rumours of the imminent collapse of a bank, causing a ‘bank run’ which actually does cause the bank’s collapse (in the UK, an example was the September 2007 run on Northern Rock). She also recounted examples of the American broadcast media’s spin on world events, such as the beginning of the war in Iraq, and 9/11. People chose to tune in to those media outlets whose view of the world they preferred. (‘Oh honey, why do you watch those channels? It’s so much nicer on Fox News.’)

There is so much data available out there, that a media channel can easily find provable facts and package them together to support its own interpretation of the world. This process of ‘cementation’ of the silos makes dialogue between opposed camps increasingly difficult – a discontinuity of contemporaneous worlds. This raises questions about the way our contextual filtering is evolving in the era of the cybersocial. And if we lose our ‘contextual compass’, interpreting the world becomes more problematic.

In Artificial Intelligence, there are embedded rules. How does this affect human agency in making judgements? One may try to inject some serendipity into the process – but serendipity, said Yasmin, is not that serendipitous.

Yasmin left us with some questions. Who controls the network, and who controls the message? Should we be sitting back, or are there ethical considerations that mean we should be actively worrying about these things and doing what we can? What is it ethical not to have known, when things go wrong?


Drew Mackie prepares network maps for organisations; most of the examples he would give are in the London area. He declared he would not be talking about network theory, although much is implied, and underlies what he would address.

Mostly, Drew and his associates work with community groups. What they seek to ‘map’ are locally available resources, which may themselves be community groups, or agencies. In this context, one way to find out ‘where stuff is’ is to consult some kind of catalogue, such as those which local authorities prepare. And a location map will show you where stuff is. But when it comes to a network map, what we try to find out and depict is who collaborates with whom, across a whole range of agencies, community groups, and key individuals.

When an organisation commissions a network map from Drew, they generally have a clear idea of what they want to do with it. They may want to know patterns of collaboration, what assets are shared, who the key influencers are, and it’s because they want to use that information to influence policy, or to form projects or programmes in that area.

Drew explained that the kinds of network map he would be talking about are more than just visual representations that can be analysed according to various metrics. They are also a kind of database: they hold huge amounts of data in the nodes and connections, about how people collaborate, what assets they hold, etc. So really, what we create is a combination of a database and a network map, and as he would demonstrate, software can help us maintain both aspects.

If you want to build such a network map, it is essential to appoint a Map Manager to control it, update it and promote it. Unless you generate and maintain that awareness, in six months the map will be dead: people won’t understand it, or why it was created.

Residents in the area may be the beneficiaries, but we don’t expect them to interact with the map to any great extent. The main users will be one step up. To collect the information that goes into building the map, and to encourage people to support the project, you need people who act as community builders; Drew and his colleagues put quite a lot of effort into training such people.

To do this, they use two pieces of online software: sumApp and Kumu. sumApp is the data-collection program, into which you feed data from various sources, and it automatically builds you a network map through the agency of Kumu, the network visualisation and analytics tool. Data can be exported from either of these.

When people contribute their data to such a system, what they see online is the sumApp front end; they contribute data, then they get to see the generated network map. No-one has to do any drawing. SumApp can be left open as a permanent portal to the network map, so people can keep updating their data; and that’s important, because otherwise keeping a network map up to date is a nightmare (and probably won’t happen, if it’s left to an individual to do).

The information entered can be tagged with a date, and this allows a form of visualisation that shows how the network changes over time.

Drew then showed us how sumApp works, first demonstrating the management ‘dashboard’ through which we can monitor who the participants are, the number of emails sent, connections made and received, etc. So that we could experience that ourselves should we wish, Drew said he would see about inviting everyone present to join the demonstration map.

Data is gathered in through a survey form, which can be customised to the project’s purpose. To gather information about a participant’s connections, sumApp presents an array of ‘cards’, which you can scroll through or search, to identify those with whom you have a connection; and if you make a selection, a pop-up box enquires how frequently you interact with that person – in general, that correlates well with how closely you collaborate – and you can add a little story about why you connect. Generally that is in words, but sound and video clips can also be added.

Having got ‘data input’ out of the way, Drew showed us how the map can be explored. You can see a complete list of all the members of the map. If you were to view the whole map and all its connections, you would see an undecipherable mess; but by selecting a node member and choosing a command, you can for example fade back all but the immediate (first-degree) connections of one node (he chose our member Steve Dale as an example). Or, you could filter to see only those with a particular interest, or other attribute in common.

Drew also demonstrated that you can ask to see who else is connected to one person or institution via a second degree of connection – for example, those people connected to Steve via Conrad. This is a useful tool for organisations which are seeking to understand the whole mesh of organisations and other contacts round about them. Those who are keenest on using this are not policy people or managers, but people with one foot in the community, and the other foot in a management role. People such as children’s centre managers, or youth team leaders – people delivering a service locally, but who want to understand the broader ecology…
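The two filtering operations Drew demonstrated – showing only a node’s immediate connections, and finding second-degree contacts reached through a named intermediary – can be sketched in a few lines of plain Python. This is not how Kumu implements them; the graph below is invented for illustration (borrowing Steve and Conrad from the demonstration, with the other names hypothetical).

```python
# A toy connection graph, stored as an adjacency dictionary.
connections = {
    "Steve":  {"Conrad", "Drew"},
    "Conrad": {"Steve", "Yasmin", "Drew"},
    "Drew":   {"Steve", "Conrad"},
    "Yasmin": {"Conrad"},
}

def first_degree(graph, node):
    """Everyone directly connected to `node` (the 'fade back the rest' view)."""
    return sorted(graph[node])

def second_degree_via(graph, node, intermediary):
    """People reachable from `node` through `intermediary`, excluding
    `node` itself and its own direct contacts."""
    return sorted(graph[intermediary] - graph[node] - {node})

print(first_degree(connections, "Steve"))                 # → ['Conrad', 'Drew']
print(second_degree_via(connections, "Steve", "Conrad"))  # → ['Yasmin']
```

In the toy graph, Yasmin is invisible from Steve’s first-degree view but appears as soon as we look through Conrad, which is exactly the kind of ‘hidden contact’ the second-degree view surfaces.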

Kumu is easy to use, and Drew and colleagues have held training sessions for people about the broad principles, only for those people to go home and, that night, draw their own Kumu map in a couple of hours – not untypically including about 80 different organisations.

Drew also demonstrated a network map created for the Centre for Ageing Better (CFAB). With the help of Ipsos MORI, they had produced six ‘personas’ which could represent different kinds of older people. One purpose of that project was to see how support services might be better co-ordinated to help people as they get older. Because Drew also talked through this in the July 2016 NetIKX meeting, I shall not cover it again here.

Drew also showed an example created in Graph Commons. This network visualisation software has a nice feature that lets you get a rapid overview of a map in terms of its clusters, highlighting the person or organisation who is most central within each cluster, aggregating clusters for viewing purposes into a single higher-level node, and letting you explore the links between the clusters. The developers of sumApp are planning a forthcoming feature that will let sumApp work with Graph Commons as an alternative graph engine to Kumu.

In closing, Drew suggested that as a table-group exercise we should discuss ideas for how these insights, techniques and tools might be useful in our own work situations; note these on a sheet of flip-chart paper; and then we could later compare the outputs across tables.

Conrad Taylor