Blog for May 2020: Gurteen Knowledge Café

How do we thrive in a hyper-connected, complex world?

An afternoon of conversation with David Gurteen

There was a great start to this Zoom meeting. David Gurteen gave some simple guidance to participants so we could all Zoom smoothly. It was a great best-practice demonstration. We are all becoming good at Zoom, but simple guidance on how to set the visuals and mute the sound is a wise precaution to make sure everyone is competent with the medium. He also set out how the seminar would be scheduled, with breakout groups and plenaries. It was to be just like a NetIKX seminar in the BDA meeting room, even though it was totally different! I felt we were in very safe hands: David was an early adopter of Zoom, but he still recognises that newcomers benefit from clarity about what works best. Well done David.

The introduction set the scene for the content of our café. We were looking at how we live in a hyper-connected, complex, rapidly evolving world. David outlined many dimensions to this connectedness, including transport changes, the internet, social media, global finances…

In his view, over the last 75 years this increased connectivity has led to massive complexity, and today we can conceive of two worlds – an old world before the Second World War and a new world that has emerged since 1945. Not only are our technological systems complex, but we human beings are immensely complex, non-rational, emotional creatures full of cognitive biases. This socio-technical complexity, together with our human complexity, has resulted in a world that is highly volatile, unpredictable, confusing, and ambiguous. Compare the world now with the locally focused world that dominated the pre-war years.

Furthermore, this complexity is accelerating as we enter the fourth industrial revolution, in which disruptive technologies and trends such as the Internet of Things, robotics, virtual reality, and artificial intelligence are rapidly changing the way we live and work. Our 20th-century ways of thinking about the world, and our old command-and-control, hierarchical ways of working, no longer serve us well in this complex environment.

Is it true that if we wish to thrive, we need to learn to see the world in a new light, think about it differently, and discover better ways in which to interact and work together?

Break-out groups

With practised expertise, David set us up into small break-out groups to discuss the talk so far. Did we agree, or did we feel that continuity was a stronger thread than change? Then we swapped groups to take the conversation on further.

Leadership

After the break-out groups, David looked at the two linked ideas behind Conversational Leadership. He had some wonderful quotes about leadership. Was the old control-and-lead model gone? Do leaders have to hold a specific role, or can we all give leadership when the opportunity is there? Of course, David provided examples of this, but perhaps after the seminar a very powerful example stands out – the 22-year-old footballer changing the mind of a government with an 80-seat majority! You don’t need to have the expected ‘correct’ label to be a powerful leader.

Conversation

We also looked at the other element: talking underpins how we work together. Using old TV clips and quotes, David urged us to consider how we communicate with each other, and whether there is scope to change the world through talking. Again, there was plenty of food for thought as we considered new ideas such as ‘unconscious bias’, ‘media bubbles’, ‘fake news’ and the global reach of social media.

We then broke into small groups again, to take the conversation further, using David’s talk as a stimulus.

Plenary

At the end of the break-out groups, we re-joined as a mass of faces smiling out of the screen, ready to share our thoughts. It is a wonderful thing, when you make a point, to see heads nodding across the Zoom squares. I recommend it to anyone who has not tried it!

Some themes emerged from the many small group chats. One was the question of the fundamental nature of change. Is our world so different when the humans within it remain very much the same? We looked very briefly at what we think human nature is, and whether it remains a constant despite the massively different technology we use on a daily basis. Even if humans are the same fallible clay, the many practical ways we can now communicate give us much more potential to hear and be heard.

We also considered the role of trust. In our workplaces, trust often seems to be in short supply, but it is key to leaders taking on authority without becoming authoritarian. The emphasis on blame culture and short-term advantage has to be countered by building genuine trust.

Is there potential for self-governing teams? The idea sounds inviting, but it would not ensure good leadership or sharing of ideas. The loudest voice might still monopolise attention – and with some justification, as not everyone wants to be proactive. Some prefer to follow, as their choice, and others like to take part but balk at the tedium of talking through every minute decision! This idea may have potential, but we agreed it would not be a panacea.

We did agree that roles and rules can be positive in giving shape to our working lives, but that they need not constrict our options to lead when the time comes. And we can see the leadership role that our professional calling suggests. With so many new information channels, so many closed groups and so many conflicting pressures, we, as information and knowledge professionals, can take a leadership role in helping and supporting our chosen groups of very human work colleagues to understand and thrive in this complex and evolving world. Conversational Leadership should be one of the tools we take away to enable that work with colleagues.

Final Notes:

The NetIKX team

NetIKX is a community of interest for Knowledge and Information Professionals. We run six seminars each year, and the focus is always on top-quality speakers and the opportunity to network with peers. We are delighted that lockdown has not stopped our seminars taking place, and we expect to take Zoom with us when we leave lockdown! You can find details of how to join the lively NetIKX community on our Members page.

Our Facilitator

David Gurteen is a writer, speaker, and conversational facilitator. The focus of his work is Conversational Leadership – a style of working where we appreciate the power of conversation and take a conversational approach to the way that we connect, relate, learn and work with each other. He is the creator of the Knowledge Café – a conversational process that brings a group of people together to learn from each other, build relationships and make better sense of a rapidly changing, complex, less predictable world. He has facilitated hundreds of Knowledge Cafés and workshops in over 30 countries around the world over the past 20 years. He is also the founder of the Gurteen Knowledge Community – a global network of over 20,000 people in 160 countries. Currently, he is writing an online book on Conversational Leadership. To join a Knowledge Café, consult his website.

Blog for January 2020: Keeping the show on the road in a virtual world

Topical News!

Virtual meetings are pretty much the only meetings in town! With Covid-19 rampaging through the UK, we all need to get skilled at the art of virtual meetings as a top priority. Now is the time to show your value as a KM professional by providing skills and knowledge in the virtual meeting space. Read on for help with this now!

Introduction – the age of disruption

This is a time when we are all learning to live with digital disruption. Processes and procedures that had lasted year upon year are suddenly subject to brand new ways of doing things. One of these changes has been to meetings. We no longer need to travel to meet someone. We can hold a virtual meeting, where wonderful technology means that geography does not stop us sharing live documents and possibly even admiring each other’s outfits!

It is interesting to see this become widespread as the health risks of face-to-face meetings grow all around us. Remote meetings are going to be a regular event for many of us. Let’s put in the effort to do them well.

We can go to meetings with all the information we need in the palm of our hands, via laptops or smart phones, leaving all those cumbersome files and bundles of paper behind.  This opens a new world of opportunity for knowledge sharing and knowledge transfer.  These changes have many advantages but, as always, it pays to think carefully about the disadvantages too, so that we can take steps to reduce them.  We still need to build confidence that we all understand digital opportunities fully and are certain to get the best from them.

This article reports back from a NetIKX (Network for Information and Knowledge Exchange) seminar where we spent a full afternoon grappling with the issues linked to virtual meetings.

Speaker: Paul Corney

Anyone who has sat through a meeting where many people are intently studying their mobiles will know the frustrations that this can cause. And virtual meetings are notorious for the problems that technology can introduce. Paul Corney, the President-elect of CILIP and an author and speaker of repute, took us to the heart of the issues for Knowledge Managers. If meetings are one way that we share knowledge, it is essential that we, working as we do to ensure the best possible sharing takes place, are in the forefront of establishing good practice. As a lively and engaging speaker, Paul at once convinced his audience that he would be able to take us through the minefield of virtual meetings and help us master the essentials of good practice.

The potential range

We considered the possibilities for a wide range of meetings. Paul has a wealth of experience – he had been line-managed by someone on the other side of the globe, 9,000 miles away. Clearly virtual meetings would be required, and when it is your boss on the line, you don’t want any distractions messing up communication. He also highlighted examples from the recently published KM Cookbook, including the International Olympic Committee, whose Knowledge Management programme began in Sydney in 2000 and where significant amounts of knowledge are now organised and transferred. This can allow learning to be disseminated to wider groups than was ever possible before. It also highlights a variety of issues, such as organising subgroups and breakouts.

An example

We did not just talk about virtual meetings – we invited one of our members, who could not attend because he was sick, to join us online. In one way it was wonderful: Conrad was able to talk to the crowd in the room from his sickbed. (Please don’t worry, he has recovered now.) He spoke about his experience of virtual meetings, bemoaning, in his memorable phrase, the ‘survival of the loudest’. But the technology only delivered half its promise! We could only hear Conrad, as the visuals refused to work. That made the experience less rich, although as a demonstration that technology can let you down, it was very apposite. If Conrad had been invited to give a long speech, this could have been a disaster, as it is much harder for an audience to concentrate when there are no visual cues to hold their attention. As it was, we only suffered from not learning what colour pyjamas Conrad wears!

Video mishaps

Paul took us through a short masterclass, aided by a stunning slide set, looking at the benefits and pitfalls: the good, the bad and the just plain awkward. One of the resources he introduced was a short video clip (A Conference Call in Real Life) which portrayed a virtual meeting as if it were a traditional face-to-face meeting. It presented what we know can go wrong, made hilarious by being acted out: the times when people are talking but the sound has gone; the strange ritual of ‘Pete has joined the meeting’ intoned several times as Pete’s link drops and he has to keep getting back up and running; and of course the ‘lurker’ who was in the meeting all the time but did not let anyone know he was there! Believe me, that is very funny when you see it! It certainly highlighted all the possible jinxes we can meet when we try virtual meetings.

Resources

As Knowledge Management advocates, we understand the importance of the medium when messages are to be transmitted, and it is vital that we don’t reduce our effectiveness at sharing when we embrace the most forward-looking technology. The video clip was just one of the valuable resources we looked at during the meeting. Since the seminar, NetIKX has collected a small set of resources that can be used to help in understanding the issues, and they are available through our website.

Audience input

One great strength of a NetIKX meeting is that the attendees are all participants who contribute their own learning from experience. As a result, we could pool our ideas about the different technologies we had used, along with stories and anecdotes from actual meetings we had survived. One example that I loved was the dry comment about an internal team meeting with a home worker: ‘the meeting didn’t go well, but at least we all saw her sitting room!’ It brings back memories of the famous incident of the professor whose children toddled and crawled into view during his live BBC interview! It is a useful reminder, for all video-link meetings, to consider whether you have an appropriate background setting…

Technology

Paul provided us with a table outlining the pros and cons of different meeting software. It was particularly helpful to get the facts, augmented by the experience of people in the room. Of course, the ‘best choice’ differs depending on the type of meetings you intend to support and the available resources. One well-resourced organisation uses Microsoft Teams, which can also control social media use through the same platform, while others use Zoom, a simpler choice, or Webex, the more traditional option. (This very useful table is available on the NetIKX website.) Once your software is chosen, you need to ensure that there are no problems with users having different software versions or incompatible systems, and remember that simply because people have the software does not mean they know how to use it effectively!

Security

Of course, the best meetings have help and support from technical experts – a strong reason for keeping good relations with our counterparts in the IT department! Firewalls may have to be negotiated without creating security risks. In your eagerness to facilitate knowledge sharing, it is easy to forget the dangers of ‘leaking’. There are many technical issues to negotiate to get the best possible solution to your virtual meeting needs.

Etiquette

And so we come to the non-tech questions. What differences do we have to manage in a virtual meeting compared to a traditional one? Do you need different rules? Will there be alternative ways to enforce them? Are there timing issues or cultural issues, and how do you get feedback to learn how well things worked and where you can improve? One issue that we considered carefully was whether a good meeting chair would automatically be a good virtual meeting chair, or whether some different skills are needed. A solution could be to have two chairs: one to manage the meeting content and another to monitor and confirm protocol. This could solve all your problems – or possibly lead to utter confusion and conflict!

Paul suggested an interesting resource: Erin Meyer’s book ‘The Culture Map’, which includes a chapter called ‘The most productive ways to disagree across cultures’. He gave the example of the words ‘that is really interesting’, which from an English person with a dry turn of phrase can carry an idiomatic meaning contrary to their face value.

Takeaways

The meeting highlighted lots of useful ideas. We then considered these in table discussions, so the participants (not including the virtual entrant – we let him retire early) were able to pull together the ideas they had found useful. NetIKX meetings always include time for table discussions, so that people have a chance to embed the ideas in their own context and pick up ideas from networking with people from other workplaces. In this case, our small groups each considered what was the most useful tip from the meeting. We then amalgamated the ideas from all the groups into a main list and voted for the best of all! This was fun, and perhaps a little frustrating, as the results were left for me to reveal in this article. I will give our full list, as all the tips were deemed useful. Here are the TOP TEN, in reverse order of popularity.

10. Consider security – don’t overlook this when tackling the technology issues.

9. Consider if the meeting needs to have small groups, or specific break-out groups.

8. Ensure the participants understand the established etiquette.

7. Ensure participants are confident and competent with the technology before the meeting starts.

6. Consider how the role of Chair will need to adapt to the virtual format.

5. Consider if you can build on face to face meetings to supplement the virtual ones.

4. Decide if you need to have two people taking lead roles: Chair of Content and Chair of Protocol?

Are you ready?  Drumroll please! Now for the top three:

3. Consider cultural issues, as these may be emphasised and exacerbated by the virtual format.

2. Preparation is vital: IT compatibility and time issues etc. need to be thought through.

Yes, in top place!

The recommendation that reminds us all that virtual meetings will ultimately have the same dynamic as any other meeting:

1. It is essential to have a clear purpose and outcomes that are understood by all participants.

Conclusion

When the NetIKX meeting ended, the conversations did not. Refreshments helped the chatter flow, and we continued with a very satisfactory networking session with wine and soft drinks, finger food and chat. All in all, it was a highly successful NetIKX meeting with a dazzling speaker and plenty of learning for all concerned. I hope this summary of what went on has been useful for you. If you want more, here are three valuable resources:

Buy (or win) Paul Corney’s book:

Paul has a new book available to buy. It is called The KM Cookbook: Stories and strategies for organisations exploring Knowledge Management Standard ISO 30401, by Chris J. Collison, Paul J. Corney and Patricia Lee Eng.

NetIKX has two copies and is running competitions on its website for them. The first was won by one of our members, who works with Plan International. The next competition will be later in the spring. Watch out for this at www.netikx.org.uk. The book is published by CILIP.

Website resources linked to this meeting

Each seminar has a page on our website where we collect resources relevant to that meeting.  However, this may be for members only. Look at the page for January 2020.  This includes up-to-date information on Zoom, Microsoft Teams etc.

Issues Checklist

For our members, we have compiled a simple checklist, bringing together all the ideas from the meeting.  It could be a useful starting point for thinking through the issues so that you have expertise in identifying how to prepare for the best possible virtual meetings.

To join NetIKX and so gain access to this material, please go to our website and use the joining form – or alternatively come to our next seminar. This will be a virtual meeting using Zoom, led by someone with considerable experience in running virtual meetings: David Gurteen. Please look at our website for details. Contact us via the website for an opportunity to attend as our guest, to talk with our members and see what NetIKX could offer you or your organisation. We look forward to you joining us then.

This article is compiled by Lissi Corfield, based on the presentation by Paul Corney and the contribution of attendees at the NetIKX seminar in January 2020.

Blog for July 2019: Content strategy

The speakers for the NetIKX meeting in July 2019 were Rahel Bailie and Kate Kenyon. Kate promised they would explain what Content Strategy is, what it isn’t, and how it relates to the work of Knowledge Management professionals. The two speakers came to this work from different backgrounds: Rahel deals with technical systems, while Kate trained as a journalist and worked at the BBC.

Managing content is different from managing data.   Content has grammar and it means something to people.  Data such as a number in a field is less complex to manage.  This is important to keep in mind because businesses consistently make the mistake of trying to manage content as if it were data.  Content strategy is a plan for the design of content systems. A content system can be an organic, human thing.  It isn’t a piece of software, although it is frequently facilitated by software.  To put together a content strategy, you have to understand all the potential content you have at hand.  You want to create a system that gives repeatable and reliable results.  The system deals with who creates content, who checks it, who signs it off and where it is going to be delivered.   The system must govern the management of content throughout its entire lifecycle.  The headings Analyse, Collect, Manage and Deliver can be useful for this.

Kate pointed out that if you are the first person in your organisation to be asked to look at content strategy, you might find yourself working in all these areas, but in the long run they should be delegated to the appropriate specialists who can follow the Content Strategy plan. In brief, the first part of content strategy is to assess business need, express it in terms of business outcomes and write a business case. It is part of the job to get a decent budget for the work! When you have a clear idea of what the business wants to achieve, the next question is: where are we now? What should we look at? You will need to audit current content, who is producing it, and why. Assess the roles of everyone in the content lifecycle – not just writers and editors, but also those who commission, create and manage it, as well as those who upload and archive it. Then look at the processes that enable this. Benchmark against standards to see if the current system is ‘good enough’ for purpose. Define the scope of your audit appropriately. The audit is not a deliverable, though vital business information may emerge; it is to help you see priorities, perhaps through Gap Analysis. Then create a requirements matrix (for example, a simple grid scoring each requirement against business value and implementation effort), which helps clarify what is top priority and what is not.

From this, produce a roadmap for change, and at each step of the way keep the business on side. A document signed off by a Steering Committee is valuable to ensure the priorities are acknowledged by all!

The discussion that followed considered the work in relation to staff concerns. For example, people might be scared at the thought of change, or worried about their jobs. It was great to have such experienced speakers to address the concerns that were raised. The meeting ended with Kate demonstrating some of the positive outcomes that could be achieved for organisations. There is huge potential for saving money and improving public-facing content.

This is taken from a report by Conrad Taylor.  To see the full report, follow this link: Content Strategy

Blog for May 2019 – Information Literacy

Account by Conrad Taylor of a NetIKX meeting with Stéphane Goldstein and Geoff Walton of the CILIP Information Literacy Group — 30 May 2019.

I must preface this account of the May 2019 NetIKX seminar with a confession: I really hate the term ‘information literacy’ and can hardly bear typing the phrase without quote marks around it! I shall explain at the end of this report why I think the term is wrong and has negative connotations and consequences. In conversation with our speakers, I found that they largely agree with me – but in English, the term has stuck. I promise I’ll hold off the quote marks until I get to my postscript…


Our two ILG speakers, Stéphane Goldstein (left) and Geoff Walton (right).

Stéphane Goldstein: How CILIP redefined Information Literacy

We had two excellent speakers, Stéphane Goldstein and Geoff Walton. Stéphane led by establishing the background and explaining CILIP’s revised definition of Information Literacy, and Geoff reported a couple of practical research projects with young people.

Stéphane is an independent research consultant, with a strong interest in information and digital literacy. He acknowledged that those are contested terms, and he’d address that. With Geoff he’s been actively involved in CILIP’s Information Literacy Group (henceforth, ‘ILG’), one of 20 or so CILIP Special Interest Groups. His role in ILG is as its advocacy and outreach officer.

First he would tell us how ILG has developed its approach, and produced a new definition of Information Literacy (IL) now backed by CILIP as a whole. Secondly, he would tell us about recent political developments promoting IL and associated ‘literacies’, as important societal and public policy issues.

The CILIP definition of Information Literacy, as published in 2018.

About 15 years ago, CILIP developed its first definition of Information Literacy. This focused on knowing when and why you need information, where to find it, how to evaluate it and use it, and how to communicate it in an ethical way. At that time, the definition of IL (a term which had originated in the USA) was strongly articulated around academic skills and the work of academic librarians, even though the ideas had potential relevance to many other contexts. Information literacy at this time was defined by CILIP in terms of skill-sets, such as in searching, interpretation, and information-handling.

That older definition still has quite a lot of traction, but CILIP found it necessary to move on. For one thing, it’s now thought relevant to look at IL in other workplace environments.

In 2016, ILG began redefining IL. There was a lengthy consultative process, in several stages, and a draft definition was presented at the Librarians and Information Literacy annual conference, LILAC, in Nottingham in 2017 (see https://www.lilacconference.com/lilac-archive/lilac-2017-1). The draft was formally endorsed by CILIP at the end of 2017, and the new definition officially launched at LILAC 2018.

Stéphane had a few printed copies of the document with him, but it is readily available in PDF form on the CILIP Web site.

Defining IL

So what is different about the new definition? It is more complex than the old one, because of the need to apply the concepts to more contexts and situations. To keep it manageable, it is split into four parts. The headline definition – ‘Information literacy is the ability to think critically and make balanced judgements about any information we find and use’ – is designed to be ‘bite-sized’, and then gets fleshed out and exemplified.

The second part of the document sets things in a broader context. It talks about how IL relates to other ‘literacies’ such as ‘digital literacy’ and ‘media literacy’; and points out that IL is not just a bag of skills, but also concerns how those skills are applied. It describes the competencies, the knowledge, the confidence in applying IL in different contexts. The core of the definition is then related to five principal contexts for IL.

Conrad asked if definitions of all these derived ‘literacies’ are predicated on some foundational definition of what ‘literacy’ means, and Stéphane said — No. Indeed over the years, one of the problems that IL practitioners have had in getting the idea across is that the use of the term ‘literacy’ tends to throw up blockages to communication and comprehension. ‘Basic literacy’ of course is widely understood to mean the ability to read and to write, and perhaps to perform basic arithmetic tasks. Stéphane has heard people say that to use the ‘literacy’ word in relation to information-handling might be experienced as pejorative or counter-productive, in effect labelling some people as ‘illiterate’.

In some other languages, the concepts behind IL are labelled without reference to literacy – in French, it is ‘information mastery’ (la maîtrise de l’information). In Germany, they speak of Informationskompetenz (‘information competence’). In the English-speaking world, IL is the term we are stuck with for historical reasons – it’s how the concept was labelled in the USA when it emerged in 1974.

Contexts of IL application

The new IL definition refers to five sorts of lifelong situations:

Everyday life
Citizenship
Education
Workplace
Health

Everyday life: CILIP says that IL has relevance to all of us, in many situations, throughout our lives. For example, it could be about knowing how to use the Internet for day-to-day transactions, such as online banking or shopping. It’s often assumed that people know how to do these things, but Stéphane reminded us that perhaps 20–25% of people lack confidence when faced with dealing with information online. Nor is there adequate access to training in these skills, either in schools or for adults.

Citizenship: These days we are beset with information of poor or dubious quality; misinformation and disinformation affect how we step up to our political responsibilities. There are IL skills involved in finding our way through the mazes of argument and alleged evidence in such matters as Brexit, climate change and so on. Judiciously picking through evidence and assertions is vital to the future of society – democratic societies in particular.

Education: This is the context where the original IL definition was at its strongest. Now we recognise it is important not just in Higher Education, but at all stages of the education cycle. ILG is concerned that school education teaches these competencies haphazardly rather than adequately, unless you are in the minority studying for the International Baccalaureate or doing EPQs (Extended Project Qualifications, as Geoff would later explain).

If you lack prior experience in validating information, and bump into these issues for the first time at age 18 when you go to University (in the UK, about 40% of young people) — well, that’s rather too late, Stéphane thinks. There are also contexts for IL in lifelong education.

Workplace: In work settings, a lot of information needs to be dealt with, but it’s different from academic information. A lot of workplace information is vested in knowledge that colleagues have – also associates, even competitors. Working in teams presupposes an ability to exchange information effectively. Stéphane asked, how does IL skill contribute to a person’s employability?

Health: ‘Health literacy’ is increasingly important. With the NHS under pressure, people are expected to self-diagnose; but how can you find and evaluate credible information sources?

The CILIP team focused on these five contexts as examples, to keep the list manageable, but of course there are other contexts too.

The role of information professionals

The fourth and final part of the CILIP statement looks at the role ‘information professionals’ may have in helping to promote and teach and help citizens develop an understanding of IL. (In my postscript below I note that librarians tend to have a limited and one-sided notion of just who counts as an ‘information professional’.)

There have been savage cutbacks in the public library and school library sectors; and these environments are being deprofessionalised. What guiding role can be played by remaining qualified librarians, by library assistants, and by library volunteers? How can non-qualified library workers be helped to develop their appreciation of IL, to help them play a role as advocates? A definition framed around broader concepts might help school and public librarians in this task.

Stéphane thinks this redefinition is well timed, given contemporary concerns about the role of disinformation in public life. Fundamental democratic principles, which need a level of trust between citizens, politicians and experts, are being undermined by discourses framed around flaky information. IL is one of the tools that can be of use here, though it is not the only one.

In Stéphane’s view, the distinctions between information, digital, media literacy and so on are not that important. With digital literacy and media literacy in particular, there is a lot of overlap these days, as more media is delivered digitally. And we should admit that the term ‘information literacy’ has little currency in public discourse: it is used chiefly by librarians.

Recent illustrative developments in the policy sphere

A Parliamentary enquiry by the Digital Culture, Media and Sport Committee, looking into ‘Disinformation and “fake news”’, reported in February 2019 after 18 months of deliberation (read more and download it from here). Facebook in particular was subjected by the Committee to a great deal of criticism.

[Conrad notes — in contrast to the lambasting that Committee handed out to the social media platforms, very little was said about the disinformation and bias in the national British press and broadcasters…]

There is a chapter about ‘digital literacy’ in the Report, which says that ‘children, young adults and adults – all users of digital media – need to be equipped in general with sufficient digital literacy to be able to understand content on the Internet, and to work out what is accurate or trustworthy, and what is not.’

The Select Committee recommended that ‘digital literacy should be the fourth pillar of education, alongside reading, writing and maths’. They called on the DCMS to co-ordinate with the Department for Education in bringing forward proposals to include digital literacy as part of the personal, social, health and economic (PSHE) curriculum. The Government, however, rejected this recommendation, claiming they are doing it already (they are not). CILIP went to talk to the DfE in spring 2019, and were told that there would be no review of the school curriculum before the next Parliament.

The Cairncross Review was commissioned to look at the future of the UK news industry, and reported at about the same time as the other DCMS committee. Amongst its observations were that online platforms’ handling of news should be placed under regulatory supervision, and it introduced the concept of ‘public interest news’. [Download the report]

That report uses the terms ‘media literacy’ and ‘critical literacy’, and echoes the Select Committee’s recommendations in calling for these skills to be promoted. It called on the Government to develop a ‘media literacy strategy’, to identify gaps in provision and to engage with all stakeholders. That recommendation has been adopted by government. This initiative came from the world of journalism, not the world of librarianship.

In April 2019 a White Paper on ‘Online Harms’ was published to initiate a consultation which has now closed (see https://www.gov.uk/government/consultations/online-harms-white-paper). The paper set out the government’s plans for a package of measures to keep UK users safe online, especially users defined as vulnerable.

The government uses a broad definition of what it means by ‘online harms’ — child sexual exploitation and abuse, terrorist content, content illegally uploaded from prisons, online sale of drugs, cyber-bullying, [encouraging] self-harm and suicide, under-age sharing of sexual imagery, and finally disinformation and manipulation. It also talks about online abuse of public figures.

Primarily the government’s White Paper aims to strengthen the regulatory environment, but it does have a nine-page sub-chapter on ways of empowering users. That section is mostly about education, and says, ‘Media and digital literacy can equip users with the skills they need to spot dangers online, to critically appraise information, and to take steps to keep themselves and others safe online. It also has wider benefits, including for the functioning of democracy, by giving users a better understanding of online content and enabling them to distinguish between facts and opinion.’

Like the Cairncross Review, the White Paper envisages the development of a National Media Literacy Strategy – which will probably take a while to evolve. It explicitly identifies librarians as partners in the development of that strategy – so perhaps CILIP’s approaches to government have not been in vain.

Stéphane expressed satisfaction that the White Paper recognises it as a serious problem when people are unable to distinguish between facts and opinions.

Measures on Health Literacy

At the end of 2017, NHS Health Education England and other bodies launched a Health Literacy Toolkit, ‘to help health staff tackle the challenges caused by low levels of health literacy, and improve health outcomes’. As the news release stated, ‘According to the Royal College of General Practitioners, health information is too complex for more than 60% of working age adults to understand, which means that they are unable to effectively understand and use health information.’ Interestingly, the toolkit aims to improve the ability of health professionals to explain health issues effectively. It was piloted in the East Midlands.

(Unfortunately, in a classic example of public sector ‘link rot’, all Health Education England URL references to the toolkit are dead-ends after just 20 months.)

IL considered in Europe

The European Commission also commissioned a report on fake news and disinformation, which offered important proposals in relation to media and information literacy. Some of the proposals are again in the educational realm, and it says ‘Media and information literacy has become an essential competence, as it is the starting point for developing critical thinking and good personal practice for discourse online, and also consequently in the offline world.’

The recommendations included better recognition of media and information literacy as key subjects for school curricula across Europe. The report also recommended that teacher training colleges incorporate media and information literacy as part of teachers’ own training and life-long learning.

Interventions in the UK (and Ireland)

Many organisations in the UK are showing an interest in IL, and the ILG has had dealings with many of them. Within DCMS there is the clumsily-named ‘Counter Online Manipulation of Security and Online Harms Directorate’ – a rather poorly resourced section, co-ordinating government policy on disinformation.

Then there is Ofcom, the communications regulator, part of its role being ‘to chart public perceptions and understanding of media literacy in the UK.’ Their understanding of media literacy is rather broad, and relates to the media environment.

The National Literacy Trust is a charity, chiefly interested in promoting basic literacy (reading and writing), but also has a set of resources on critical literacy, fake news and disinformation. You can read more and create a login to access free teaching resources, from here.

‘Newswise’ is a pilot project developed under the auspices of The Guardian newspaper, with Google funding. It helps to develop news literacy amongst primary school children. (See June 2018 Guardian article about Newswise.)

Ireland is interesting: the Media Literacy Ireland initiative is backed by the Broadcasting Authority of Ireland (https://www.bemediasmart.ie/about) and has brought some 40 key players together, including the national broadcaster RTÉ, the Press Council of Ireland, Sky, Facebook and Google, and the Library Association of Ireland. Stéphane thinks it helps that Ireland’s population is only five million; in a country of 65 million, it could be much more difficult.

Another challenge which Stéphane noted is, with many organisations showing an interest in these issues, how to ensure join-up and concerted activity? Internationally, UNESCO does some work in this area but it doesn’t have a lot of influence. The EU has shown an interest, but has no statutory powers in this area; its activity has been limited to research and reports.

Geoff Walton: improving information discernment

Geoff reported on research work funded by the Information Literacy Group and the British Academy. This work is interesting both for its cross-disciplinary approach and scientific components, and its trialling of practical educational interventions. The team included Jamie Barker at Loughborough University, Matt Pointon at Northumbria University, and Martin Turner and Andy Wilkinson from Staffordshire University. Geoff himself is a senior lecturer in the Department of Information and Communication at Manchester Metropolitan University.

Geoff reminded us again of the CILIP 2018 definition of Information Literacy as ‘the ability to think critically and make balanced judgements about any information we find and use’, which ‘empowers us as citizens to develop informed views and to engage fully with society.’ Geoff and colleagues focused on the bit about making informed judgements about information that was presented to the test subjects.

He also said that rather than using the term ‘balanced’, he’d prefer to say ‘well calibrated’. There seems to be a problem with the idea of balance — note how badly the BBC has misjudged the practical consequences of fetishising ‘balance’ between opposed views, notably on the Today programme and Question Time.

The concept of information discernment

Some people are good at making well-calibrated judgements about information, and others do it poorly. The studies considered these as gradations in information discernment. The differences affect people’s responses when exposed to mis-information – emotionally, cognitively and even physiologically.

Many maps showed the presumed location of the (mythical) island of ‘Hy-Brasil’ west of Ireland — though they often disagreed about just where it was!

The research explored this in a case study with young people 18–21 years of age. Further research with a cohort of young people aged 16–17 also found that there are ways in which we can help the ‘low discerners’ to become ‘high discerners’, with the right kind of training. Geoff would report on both.

Dis-information and mis-information are nothing new. Geoff referred to a persistent myth of the existence of a fog-shrouded island west of Ireland known as ‘Hy-Brasil’ (possibly from Irish Uí Breasail). Belief in this inspired several expeditions, and the island was marked on maps from 1325 onwards, even as late as 1865. Geoff also referred to the contemporary nutritionist Gillian McKeith, author of ‘You Are What You Eat’, and her string of bogus qualifications (see Wikipedia: the tale of how scientist Ben Goldacre obtained one of the same diplomas for his dead cat will amuse you).

Sometimes the issue is about how information is presented. Geoff referred to a BBC News report in the 1960s about a pay dispute at Monkwearmouth Pit near Sunderland, where his dad was a miner. The BBC claimed miners were taking home £20 a week. Geoff’s dad was furious at this deception: he was taking home just £15 a week. What the BBC had done was to take an average from the top to the bottom of the pay scale.
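
To see how misleading that can be, consider a purely illustrative set of numbers: if the pay scale ran from £12 a week for the newest recruit to £28 for the most senior man, the midpoint of the scale is £20, yet if most miners were clustered near the bottom, a typical pay packet could still be £15. An average of a pay scale is not an average of the men paid on it.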

Characteristics of high information discerners — people who tend to exhibit high levels of information discernment also display these characteristics:

they are curious about the world
they are sceptical about information presented to them by search engines
they will consult more than one information source [deliberately doing so is sometimes called ‘triangulation’, with reference to the practices of land surveying]
they recognise that it is important to check the credentials of the author or other source
if they encounter contradictory information, they tend to consider it also, rather than ignore it
they display higher levels of attention to information, as shown in the eye-tracking elements of the research

In contrast, people who exhibit low levels of information discernment are significantly less likely to be aware of these issues, and are generally inattentive to the detail of information they encounter.

It’s in the blood

Geoff’s perceptions around these issues were widened by a chance conversation with some Manchester colleagues in the field of the psychology of sport and exercise. These researchers are interested in how people deal with stress, and they operate a model of challenge and threat. This then reminded Geoff of an information behaviour model proposed by Tom Wilson in the Journal of Documentation in 1999 (‘Models in information behaviour research’), in which he identifies a role for the psychological stress/coping mechanism. Out of this encounter, an interdisciplinary twist to the research was developed.

In a stressful situation, some people regard the stressor as a challenge and respond to it adaptively, but if you experience it as a threat, your response is likely to be maladaptive and unhelpful to yourself. As Geoff put it, standing before the NetIKX audience, he felt inspired to rise to the challenge, but if his response were maladaptive his throat might tighten, he might panic and fluff his lines.

Physiologically, any positive response to stress goes along with a measurable dilation (widening) of the blood vessels, especially the arteries and major veins, increasing blood flow, including of course to the brain. But in the case of maladaptive responses, the experience of threat results in the opposite, vasoconstriction, which restricts blood flow.

The research team therefore investigated whether there was a link between their ‘information discernment’ model and measurable physiological reactions to misinformation.

The lab space

The research team set up a laboratory space with various instruments. One side was equipped with a ‘Finometer® PRO’, a non-invasive monitoring device which puts a small pressure cuff around the test subject’s finger and uses transmitted light to examine blood flow on a continuous beat-to-beat basis, reporting on blood pressure, pulse rate etc. (See http://www.finapres.com/Products/Finometer-PRO). The other side of the lab featured an eye-tracking system, which Geoff didn’t describe in detail, but he later showed us the kind of ‘attention heat map’ display it produces.

The team got their subjects to do various tasks. One task was a word-search which was actually impossible to complete satisfactorily (this reminds me of the famous Star Trek Kobayashi Maru no-win scenario, a test of character). Obviously, not having a solution generates some mild stress. They also told the test subjects that they were helping another student, referred to as ‘the confederate’, to win £100 (this was a fib). Some participants were additionally told that the ‘confederate’ they were thus helping was a person of extreme religious views (again, not true, but a way of winding up the stress levels).

With the test completed, the test subjects were then taken through a PANAS self-reporting mood-assessing questionnaire (Positive And Negative Affect Schedule), and then the subjects were fully and honestly debriefed.

Results: There does seem to be a relationship between the level of information discernment and how subjects reacted to stress. Those whom the team classified as exhibiting high information discernment tended to react to stress as a ‘challenge’, rather than treating it like a threat to their well-being. They exhibited more efficient blood flow, and a healthier, better-adapted heart response. Also – and this came as something of a surprise to the team – high information discernment individuals responded with more positive emotions in the PANAS assessment.

The information search task

Two of the heat maps generated by the eye-tracking equipment, showing how much attention different participants gave to different parts of the information display. The map on the left was measured from someone the team considered to exhibit the characteristics of a ‘high information discerner’; the one on the right, from a ‘low information discerner’.

Another task for the test subjects was to look at a page of information, made from text and data graphics sourced from The Guardian, headed ‘Religious extremism main cause of terrorism, according to report’ (it’s a rearrangement of material from a November 2014 article).

Geoff displayed a couple of ‘heat maps’ that were generated from the eye tracking monitor system, showing the extent to which different parts of the ‘page’ received the most attention from different participants: see image. The left example belonged to a ‘high discerner’, and the other to a ‘low discerner’. Admittedly, the two examples Geoff showed us were at the extremes of those observed.

Delving deeper, Geoff said that the ‘high discerner’s’ saccadic movements (this means, the sequential leaps between eye-fixation points) were measurably more ordered; this person also extensively examined the graphical components, which were completely ignored by the low discerner.

Conrad commented that there are particular skills involved in interpreting data graphics, which could make it a complicating factor in this study. It may be that the individual whose heat-map was displayed on the right does not have those skills, and therefore ignored the data graphics. Of course, lack of that skill might correlate with ‘low information discernment capability’ generally, and it doesn’t seem that unlikely, but it’s not a foregone conclusion.

From this part of the study, the research team concluded:

People with low information discernment capabilities are more likely to experience negative physical and emotional reactions [as tested by the Finometer and the PANAS review] when looking for information. This is exacerbated if the information they are presented with is contradictory.
‘Low discerners’ are less likely to attend to detail, and are unlikely to check the sources of information, or the credentials of the writer or other person telling them the information.

Can we help develop information discernment?

Is it possible to move low information discerners to become high information discerners? The good news is, the team found that they could, and they offer four rules of thumb:

In such training, you need to match a person’s context as closely as you can, such as what people are typically doing and trying to find out. In other words, ‘generic’ forms of training won’t work.
The training needs to be informal and conversational.
The training should be collaborative.
It should be practical, not theoretical.

Geoff passed around some copies of materials which they prepared for the training workshop with a group of school students aged 16–17. He noted that similar results have also been recorded with first-year university students.

In this case, the subjects were given two pieces of information about smoking. One was a company Web site, the other was about research from a university. Trainees noticed things of which they were previously oblivious, such as citations and sources, and they developed an appreciation of the need to consider whether information sources are reliable.

This part of the work, funded by the British Academy, was aimed at helping a particular school deliver their Extended Project Qualification. The EPQ is an optional qualification within the National Qualifications Framework in England and Wales, usually taken alongside A-Levels, and involves producing a dissertation of about 5,000 words or some similar artefact, plus documentation showing research and thinking processes. (It is similar to the Scottish Sixth Year Studies qualification which was available from 1968 to 2000.) The EPQ assignment is probably the first time that school students have to do something where the information has not been handed to them on a plate – they have to find it out for themselves, and it is a good introduction to what they’ll meet at university.

Teachers involved in this workshop project noted a real shift in information behaviours as a result – no longer was there a passive acceptance of information, and students started to question the credibility of sources. The only problem, said Geoff, is that they can go too far and question everything, and you have to lead them back to an understanding that there are certain facts out there!

The project conclusion is that by training low information discerners, you can help them construct their own cognitive firewalls – which is better than relying on ‘machines of loving grace’ to protect us with filters and AI algorithms. We may hope that by encouraging people to have the cognitive habits and capabilities of high information discerners, they will be less susceptible to the unsubstantiated claims which permeate the Internet, and especially social media.

Stéphane and Geoff would love to work with other groups and constituencies to help mainstream Information Literacy, and would like to hear ideas for joint events and case studies.

Discussion summaries

After a break we assembled into table groups, and each table discussed a different aspect of the issue. Lissi Corfield then called on the groups to report back to a brief plenary session, and I report some of the points below:

On trusting information sources: one participant remarked that her window-cleaner had said that he got all his knowledge of what was going on in the world from Facebook. Initially that thought might inspire horror; but on the other hand, we’ve always trusted people we know as intermediaries; is this so different?
IL in the workplace: the table considering this topic agreed that the tools and methods described by Geoff (and used in schools) could also find application in the workplace. But it would be lovely not to have to teach newcomers coming in from school and university about IL – they should have learned it already!
Logic and education: Another table, from which Dion Lindsay reported, thought that Logic, currently taught in the first year of university Philosophy courses, should have a place in secondary school education, as a foundation for thought. (I commented, ‘Bring back the mediaeval curriculum!’, in which Grammar, Logic and Rhetoric were the three foundational liberal arts subjects – the trivium.)
Self-awareness: a fourth table group commented that there is value in helping people to be more aware of, and critical of, their own thought processes.
Awareness of how search engines work – for example, why Google searches result in a particular ranking order of results – should help to foster a more critical approach to this information gateway.

PostScript: my discomforts with the ‘IL’ term

Origins of the ‘information literacy’ term

Paul Zurkowski’s 1974 paper — where the term ‘information literacy’ was coined.

It was from Geoff that I learned that the term ‘information literacy’ was coined by Paul G. Zurkowski in 1974, in a report for the [U.S.] National Commission on Libraries and Information Science.

Indeed, Geoff was so kind as to email me a copy of that paper, as a PDF of a scanned typescript — a section of the cover page is shown right. In the USA, the term seems to have been picked up by people involved in Library and Information Studies (LIS) and in educational technology.

I personally first encountered the IL term in January 2003, at a workshop hosted by the British Computer Society’s Developing Countries Specialist Group, in the run-up to the UN World Summit on the Information Society. The WSIS secretariat had been pushing the idea of ‘Information Literacy’. John Lindsay of Kingston University chaired the meeting.

A temple-like model of ‘Seven Pillars of Information Literacy’ had been dreamed up four years earlier by the Society of College, National and University Libraries (SCONUL), and two LIS lecturers from London Metropolitan University were promoting it. A full account of that workshop is available as a PDF.

SCONUL’s ‘Seven Pillars of Information Literacy’, redrawn by Conrad Taylor.

I took an immediate dislike to the term. Everyone agrees that true literacy, the ability to read and to write, is a fundamental skill and a gateway to learning; and this smelt to me like a sly attempt to imply that librarians, with their information retrieval and analysis skills, were as valuable to society as teachers.

In September 2003 I ran a conference called Explanatory and Instructional Graphics and Visual Information Literacy. My preparatory paper ‘New kinds of literacy, and the world of visual information’ is available from this link.

Also, proper literacy requires skills in both consumption (reading) and production (writing); but in my experience librarians say little to recognise the skills of those who make information products: writers and illustrators, graphic designers, documentary film producers, encyclopaedists and so on.

There is also the term ‘visual literacy’, featured since 1989 in the journal of that name and promoted by the International Visual Literacy Association (IVLA), but I am prepared to cut that use of the L-word more slack, because IVLA and the journal are concerned with design as well as consumption.

As for the implication that Stéphane referred to, that someone having difficulty in finding and making critical judgements about information is in effect being shamed as ‘illiterate’, I confess that objection hadn’t occurred to me.
Critique the object/system, not the hapless user?

In the field of Information and Communication Design, where I have been a practitioner for 30+ years, our professional approach to failures of communication or information discovery is not to default to blaming the reader/viewer/user, but to suspect that the information artefact or system might not have been designed very well. It’s just not on to react to such problems by claiming the public is ‘information illiterate’, when the problem is your illiterate inability to communicate clearly.

Some weeks later, while at the ISKO UK conference, it also occurred to me that people who work to improve the way information sources are organised, and work on improving how they are documented and accessed, share this attitude with Information Designers. They also do not accuse people of ‘illiteracy’ – they address the lack of coherence in the information. Is there something about librarian culture that resists deconstructing and critiquing information products and systems, and leaves them inviolate on a pedestal?

Anyway, I’ll draw my small tirade to a close! I do agree that there is A Problem, and I largely agree with Stéphane and Geoff about the shape of that problem. I shall continue to place quote marks around ‘information literacy’ and contest the use of the term. I like the German alternative that Stéphane mentioned, Informationskompetenz. I think I shall refer to ‘information handling competences’ in the plural, as my Preferred Term.

— Conrad Taylor, July 2019

Blog for March 2019 Seminar: Open Data

The speaker at the NetIKX seminar in March 2019 was David Penfold, a veteran of the world of electronic publishing who also participates in ISO committees on standards for graphics technology.  He has been a lecturer at the University of the Arts London and currently teaches Information Management in a publishing context.

David’s talk looked at the two aspects of Open Data.  The most important thing for us to recognise is Data as the foundation and validation of Information.  He gave a series of interesting historical examples and pointed out that closer to the present day, quantum theory, relativity and much besides all developed because the data that people were measuring did not fit the predictions that earlier theoretical frameworks suggested.  A principle of experimental science is that if the data from your experiments don’t fit the predictions of your theories, it is the theories which must be revisited and reformulated.

David talked about some classificatory approaches. He mentioned the idea of a triple, where you have an entity, plus a property, plus a value. This three-element method of defining things is essential to the implementation of Linked Data. Unless you can establish relationships between data elements, they remain meaningless, just bare words or numbers. A number of methods have been used to associate data elements with each other and with meaning. The Relational Database model is one. Spreadsheets are based on another model, and the Standard Generalised Markup Language (and subsequently XML) was an approach to giving structure to textual materials. Finally, the Semantic Web and the Resource Description Framework have developed over the last two decades.
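
To make the triple idea concrete, here is a minimal sketch in plain Python (my own illustration, not anything David showed; the entities and properties are invented, and real Linked Data would use URIs and an RDF framework rather than bare strings):

```python
# Triples as (entity, property, value); invented facts for illustration.
triples = [
    ("London", "isCapitalOf", "United Kingdom"),
    ("London", "population", 8900000),
    ("United Kingdom", "memberOf", "United Nations"),
]

def values_for(entity, prop):
    """Return every value asserted for a given entity and property."""
    return [o for (s, p, o) in triples if s == entity and p == prop]

print(values_for("London", "population"))  # [8900000]
```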

David then moved on to what it means for data to be Open. There are various misconceptions around this – it does not mean Open Access, a term used within the worlds of librarianship and publishing to mean free-of-charge access, mainly to academic journals and books. We are also not talking about Open Archiving, which has a close relationship to the Open Access concept. Much of the effort in Open Archiving goes into developing standardised metadata so that archives can be shared. Open Data is freely available. It is often from government but could be from other bodies and networks, and even private companies.

We then watched a short video clip, from 2012, of Sir Nigel Shadbolt, a founder of the Open Data Institute, which set up the open data portals for the UK government. He explains how government publication of open data, in the interests of transparency, is now found in many countries, at national, regional and local levels. The benefits include improved accountability, better public services, improvement in public participation, improved efficiency, creation of social value and innovation value to companies.

We heard about examples of Open Data: for instance, Network Rail publishes open data and benefits through improvements in customer satisfaction. It says that its open data generates technology-related jobs around the rail sector and saves costs in information provision when third parties invest in building information apps based on that data. The data is used by commercial users, but also by the rail industry and Network Rail itself. It can also be accessed by individuals and academia.

Ordnance Survey open data is important within the economy and in governance. David uses one application in his role as Chair of the Parish Council in his local village. The data allows them to see Historic England data for their area, and Environment Agency information showing Sites of Special Scientific Interest and Areas of Outstanding Natural Beauty.

After the tea-break, David showed three clips from a video of a presentation by Tim Berners-Lee. David then explained how the Semantic Web works. It is based on four concepts: a) metadata; b) structural relationships; c) tagging; and d) the Resource Description Framework method of coding, which in turn is based on XML.

The Open Data Institute has developed an ‘ethics canvas’, which we looked at to decide what we thought about it.  It gives a list of fifteen issues which may be of ethical concern.  We discussed this in our table groups and this was followed by a general discussion.  There were plenty of examples raised from our collective experience, which made for a lively end to the seminar.

This is taken from a report by Conrad Taylor

To see the full report follow this link: Conradiator : NetIKX meeting report : Open Data

Blog for January 2019: Wikipedia & knowledge sharing

In January 2019, NetIKX held a seminar on the topic ‘Wikipedia and other knowledge-sharing experiences’. Andy Mabbett gave a talk about one of the largest global projects in knowledge gathering in the public sphere: Wikipedia and its sister projects. Andy is an experienced editor of Wikipedia with more than a million edits to his name. He worked in website management and always kept his eyes open for new developments on the Web. When he heard about the Wikipedia project, founded in 2001, he searched there for information about local nature reserves – he is a keen bird-watcher. There was nothing to be found, and this inspired him to add his first few entries. He has been a volunteer since 2003 and makes a modest living with part of his income stream coming from training and helping others become Wikipedia contributors too. The volunteers are expected to summarise publicly available material, not to create new information. The sources can be as diverse and scattered as necessary, but Wikipedia pulls that information together coherently and gives links back to the sources.

The Wikimedia Foundation, which hosts Wikipedia, says: ‘imagine a world in which every single human being can freely share in the sum of all knowledge. That is our commitment.’

Wikipedia is the free encyclopaedia that anybody can edit.  It is built by a community of volunteers contributing bit by bit over time.  The content is freely licensed for anybody to re-use, under a ‘creative commons attribution share-alike’ licence.  You can take Wikipedia content and use it on your own website, even in commercial publications and all you have to do in return is to say where you got it from.  The copyright in the content remains the intellectual property of the people who have written it.

The Wikimedia Foundation is the organisation which hosts Wikipedia. It keeps the servers and the software running. The Foundation does not manage the content. It occasionally gets involved over legal issues (for example, child protection), but otherwise it does not set editorial policy or get involved in editorial conflicts. That is the domain of the community.

Guidelines and principles

Wikipedia operates according to a number of principles called the ‘five pillars’.

  • It is an encyclopaedia which means that there are things that it isn’t: it’s not a soap box, nor a random collection of trivia, nor a directory.
  • It’s written from a neutral point of view, striving to reflect what the rest of the world says about something.
  • As explained, everything is published under a Creative Commons open license.
  • There is a strong ethic that contributors should treat each other with respect and civility. That is the aim, although Wikipedia isn’t a welcoming space for female contributors and women’s issues are not as well addressed as they should be.  There are collective efforts to tackle the imbalance.
  • Lastly there is a rule that there are no firm rules! Whatever rule or norm there is on Wikipedia, you can break it if there is a good reason to do so.  This does give rise to some interesting discussions about how much weight should be given to precedent and established practice or whether people should be allowed to go ahead and do new and innovative things.

In Wikipedia, all contributors are theoretically equal and hold each other to account. There is no editorial board, and there are no senior editors who carry a right of overrule or veto. ‘That doesn’t quite work in theory,’ says Andy, ‘but like the flight of the bumblebee, it works in practice.’ For example, in September 2018, newspapers ran a story that the Tate Gallery had decided to stop writing biographies of artists for their website, and would use copies of Wikipedia articles instead. The BBC does the same, with biographies of musicians and bands on their website, and also with articles about species of animals. The confidence of these institutions comes because it is recognised that Wikipedians are good at fact-checking, and that if errors are spotted, or assertions made without a supporting reliable reference, they get flagged up. But there are some unintended consequences too. Because dedicated Wikipedians have the habit of checking articles for errors and deficits, Wikipedia can be a very unfriendly place for new and inexperienced editors. A new article can get critical ‘flags’ to show something needs further attention. People can get quite zealous about fighting conflicts of interest, bias or pseudo-science.

For most people there is just one Wikipedia.  But there are nearly 300 Wikipedias in different languages.  Several have over a million articles, some only a few thousand. Some are written in a language threatened with extinction and they constitute the only place where a community of people is creating a website in that language, to help preserve it as much as to preserve the knowledge.

Wikipedia also has a number of ‘sister projects’.  These include:

  • Wiktionary is a multi-lingual dictionary and thesaurus.
  • Wikivoyage is a travel guide.
  • Wikiversity has a number of learning modules so you can teach yourself something.
  • Wikiquote is a compendium of notable and humorous quotations.

Probably the Wikidata project is the most important of the sister projects, in terms of the impact it is having and its rate of expansion. Many Wikipedia articles have an ‘infobox’ on the right side. These information boxes are machine-readable as they have a microformat mark-up behind the scenes. From this came the idea of gathering all this information centrally. This makes it easier to share across different versions of Wikipedia, and it means all the Wikipedias can be updated together, for example, if someone well known dies. Under their open licence, data can be used by any other project in the world. Using the Wikidata identifiers for millions of things can help your system become more interoperable with others. As a result, there is a huge asset of data, including data taken from other bodies (for example, English Heritage or chemistry databases).
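
As an illustration of what those identifiers make possible, the short Python sketch below fetches the machine-readable record for Wikidata item Q42 (Douglas Adams) from Wikidata’s public Special:EntityData endpoint. This is my own example, not one Andy showed:

```python
# Fetch the machine-readable record behind a Wikidata identifier.
import json
import urllib.request

url = "https://www.wikidata.org/wiki/Special:EntityData/Q42.json"
with urllib.request.urlopen(url) as response:
    data = json.load(response)

entity = data["entities"]["Q42"]
print(entity["labels"]["en"]["value"])        # 'Douglas Adams'
print(entity["descriptions"]["en"]["value"])  # a short English description
```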

Wikipedia has many more such projects, which Andy explained to us, and the information was a revelation to most of us. So we were then delighted to spend some time on an exercise in small groups. This featured two speakers who talked about the way they had used a shared Content Management System to gather and share knowledge. These extra speakers circulated round the groups to help the discussions. The format was different to NetIKX’s usual breakout groups, but feedback from participants was very positive.

This blog is based on a report by Conrad Taylor.

To see the full report you can follow this link: Conradiator : NetIKX meeting report : Wikipedia & knowledge sharing

 

Blog for the November 2018 seminar: Networks

The rise of on-line social network platforms such as Facebook has made the general population more network-aware. Yet, at the same time, this obscures the many other ways in which network concepts and analysis can be of use. Network Science was billed as the topic for the November 2018 NetIKX seminar, and in hopes that we would explore the topic widely, I did some preliminary reading.

I find that Network Science is perhaps not so much a discipline in its own right, as an approach with application in many fields – analysis of natural and engineered geography, transport and communication, trade and manufacture, even dynamic systems in chemistry and biology. In essence, the approach models ‘distinct elements or actors represented by nodes (or vertices) and the connections between [them] as links (or edges)’ (Wikipedia), and has strong links to a branch of mathematics called Graph Theory, building on work by Euler in the 18th century.
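
By way of illustration, here is a toy network in that nodes-and-edges sense, built with the Python library networkx (my choice of tool; nothing in the seminar depended on it):

```python
# A toy network: distinct elements as nodes, connections as edges.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("port", "railway"), ("railway", "warehouse"),
    ("warehouse", "shop"), ("power station", "railway"),
    ("power station", "warehouse"),
])

print(nx.degree_centrality(G))              # which nodes are best connected?
print(nx.shortest_path(G, "port", "shop"))  # a path counted in link-steps
```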

In 2005, the US National Academy of Sciences was commissioned by the US Army to prepare a general report on the status of Network Science and its possible application to future war-fighting and security preparedness: the promise was, that if the approach looked valuable, the Army would put money into getting universities to study the field. The NAS report is available publicly at http://nap.edu/11516 and is worth a read. It groups the fields of application broadly into three: (a) geophysical and biological networks (e.g. river systems, food webs); (b) engineered networks (roads, electricity grid, the Internet); and (c) social networks and institutions.

I’ve prepared a one-page summary, ‘Network Science: some instances of networks and fields of complex dynamic interaction’, which also lists some further study resources, five books and an online movie. (Contact NetIKX if you want to see this). In that I also note: ‘We cannot consider the various types of network… to be independent of each other. Amazon relies on people ordering via the Internet, which relies on a telecomms network, and electronic financial transaction processing, all of which relies on the provision of electricity; their transport and delivery of goods relies on logistics services, therefore roads, marine cargo networks, ports, etc.’

The NetIKX seminar fell neatly into two halves. The first speaker, Professor Yasmin Merali of Hull University Business School, offered us a high-level theoretical view and the applications she laid emphasis on were those critical to business success and adaptation, and cybersecurity. Drew Mackie then provided a tighter focus on how social network research and ‘mapping’ can help to mobilise local community resources for social welfare provision.

Drew’s contribution was in some measure a reprise of the seminar he gave with David Wilcox in July 2016. Another NetIKX seminar which examined the related topics of graph databases and linked data graphs is that given by Dion Lindsay and Dave Clarke in January 2018.

Yasmin Merali noted that five years ago there wasn’t much talk about systems, but now it is commonplace for problems to be identified as ‘systemic’. Yet, ironically, Systems Thinking used to be very hot in the 1990s, later displaced by a fascination with computing technologies. Now once again we realise that we live in a very complex and increasingly unpredictable world of interactions at many levels; where the macro level has properties and behaviours that emerge from what happens at the micro level, without being consciously planned for or even anticipated. We need new analytical frameworks.

Our world is a Complex Adaptive System (CAS). It’s complex because of its many interconnected components, which influence and constrain and feed back upon each other. It is not deterministic like a machine, but more like a biological or ecological system. Complex Adaptive Systems are both stable (persistent) and malleable, with an ability to transform themselves in response to environmental pressures and stimuli – that is the ‘adaptive’ bit.

We have become highly attuned to the idea of networks through exposure to social media; the ideas of ‘gatekeepers’, popularity and influence in such a network are quite easy to understand. But this is selling short the potential of network analysis.

In successful, resilient systems, you will find a lot of diversity: many kinds of entity exist and interact within them. The links between entities in such systems are equally diverse. Links may persist, but they are not there for ever, nor is their nature static. This means the network can be ‘re-wired’, which makes adaptation easier.

Amazing non-linear effects can emerge from network organisation, and you can exploit this in two ways. If adverse phenomena are encountered, the network can implement a corrective feedback response very quickly (for example, to isolate part of the network, which is the correct public health response in the case of an epidemic). Or, if that reaction isn’t going to have the desired effect, we can try to re-wire the network, dampening some feedback loops, reinforcing others, and thus strengthening those ‘constellations’ of links which can best rise to the situation.

Information flows in the network. Yasmin offered us an analogy: the road network system and, distinct from that, the traffic running across that network. People writing about the power of social media have been concentrating on the network structure (the nodes, and the links), but not so much on the factors which enable or inhibit different kinds of dynamic within that structure.

Networks can enable efficient utilisation of distributed resources. We can also see networks as the locus where options are generated. Each change in a network brings about new conditions. But the generative capacity does come at a cost: you must allow sufficient diversity. Even if there are elements which don’t seem useful right now, there is a value in having redundant components: that’s how you get resilience.

You might extend network thinking outwards, beyond networking within one organisation, towards a number of organisations co-operating or competing with each other. Some of your potential partners can do better in the current system and with their resources than you; in another set of circumstances, it might be you who can do better. If we can co-operate, each tackling the risks we are best able to cope with, we can spread the overall risk and increase the capability pool.

Yasmin referred to the idea of ‘Six Degrees of Separation’ – that through intermediate connections, each of us is just six link-steps away from anybody else. The idea was important in the development of social network theory, but it turns out to have severe limitations, because where links are very tenuous, the degree of access or influence they imply can be illusory. That’s why simplistic social network graphs can be deceptive.

In a regular ‘small worlds’ network, everyone is connected to the same number of people in some organised way, and even one extra random link shortens the path length. It’s possible to ‘re-wire’ a network to get more of these small-world effects, with the benefit of making very quick transitions possible.
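
That small-world effect is easy to demonstrate with networkx (my own sketch; the parameters are purely illustrative): start from a regular ring lattice, randomly rewire just a few links, and the average path length falls markedly.

```python
# Small-world sketch: a regular ring lattice versus the same lattice
# with a few links randomly rewired. Parameters are illustrative only.
import networkx as nx

n, k = 100, 4  # 100 nodes, each initially linked to its 4 nearest neighbours
regular = nx.watts_strogatz_graph(n, k, p=0.0)                     # no rewiring
rewired = nx.connected_watts_strogatz_graph(n, k, p=0.05, seed=1)  # 5% rewired

print(nx.average_shortest_path_length(regular))  # long paths round the ring
print(nx.average_shortest_path_length(rewired))  # markedly shorter
```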

But there is another kind of network, similar in structure to the Internet and most of the biological systems we might consider – and that’s what we can call the ‘scale-free’ network. In this case, there is no cut-off limit to how large, or how well-connected a node can be.

Networks are also ‘lumpy’ – in large networks, there are very large hubs, but also adjacent less-prominent hubs, which in an Internet scenario are less likely to be attacked or degraded. This gives some hope that the system as a whole is less likely to be brought to its knees by a random attack; but a well-targeted attack against the larger hubs can indeed inflict a great deal of damage. This is something that concerns security-minded designers of networks for business. It is strategically imperative to have good intelligence about what is going on in a networked system – what are the entities, which of them are connected, and what is the nature of those connections and the information flows between them.
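
That asymmetry between random failure and targeted attack can be shown in a few lines. The sketch below (again my own illustration, with arbitrary parameters) grows a scale-free network using the Barabási–Albert model, then compares removing fifty random nodes with removing the fifty biggest hubs:

```python
# Grow a scale-free network, then compare random node loss with a
# targeted attack on its biggest hubs. Parameters are arbitrary.
import random
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

def largest_component_after(graph, nodes_to_remove):
    """Size of the largest connected component once nodes are removed."""
    g = graph.copy()
    g.remove_nodes_from(nodes_to_remove)
    return len(max(nx.connected_components(g), key=len))

random.seed(42)
random_nodes = random.sample(list(G.nodes), 50)
hubs = sorted(G.nodes, key=lambda v: G.degree[v], reverse=True)[:50]

print(largest_component_after(G, random_nodes))  # barely dented
print(largest_component_after(G, hubs))          # badly fragmented
```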

It’s important to distinguish between resilience and robustness. Resilience often comes from having network resources in place which may be redundant, and may appear superfluous or of marginal value, but which provide a broader option space and a better ability to adapt to changing circumstances.

Looking more specifically at social networks, Yasmin referred to the ‘birds of a feather flock together’ principle, where people are clustered and linked based on similar values, aspirations, interests, ways of thinking etc. Networks like this are often efficient and fast to react, and much networking in business operates along those lines. However, within such a network, you are unlikely to encounter new, possibly valuable alternative knowledge and ways of thinking.

Heterogeneity tends to arrive along weaker links, but those links are valuable for expanding the knowledge pool. Expanded linkages may operate along the ‘six degrees’ principle, and through intermediate friends-of-friends, who serve both as transmitters and as filters. And yet a trend has been observed for social network engines (such as Facebook) to create a superdominance of ‘birds of a feather’ types of linkage, leading to confirmation bias and even polarisation.

In traditional ‘embodied’ social networks, people bonded and transacted with others whom they knew in relatively persistent ways, and could assess through an extended series of interactions in a broadly understandable context. In the modern cybersocial network, this is more difficult to re-create, because interactions occur through ‘shallow’ forms such as text and image – information is the main currency – and often between people who do not really know each other.

Another problem is the increased speed of information transfer, and decreased threshold of time for critical thought. Decent journalism has been one of the casualties. Yes, ‘citizen journalism’ via tweet or online video post can provide useful information – such informants can often go where the traditional correspondent could not – but verification becomes problematic, as does getting the broader picture, when competition between news channels to be first with the breaking story ‘trumps’ accuracy and broader context.

If we think of cybersocial networks as information networks, carrying information and meaning, things become interesting. Complexity comes not just from the arrangement of links and nodes, but also from the multiple versions of information, and whether a ‘message’ means the same to each person who receives it: there may be multiple frameworks of representation and understanding standing between you and the origin of the information.

This has ethical implications. Some people say that the Internet has pushed us into a new space. Yasmin argues that many of the issues are those we had before, only now more intensely. If we think about the ‘gig economy’, where labour value is extracted but workers have scant rights – or if we think about the ownership of data and the rights to use it, or surveillance culture – these issues have always been around. True, those problems are now being magnified, but maybe that cloud has a silver lining in forcing legislators to start thinking about how to control matters. Or is it the case that the new technologies of interaction have embedded themselves at such a fundamental level that we cannot shift them?

What worries Yasmin more are issues around Big Data. As we store increasingly large, increasingly granular data about people from sources such as Fitbits, GPS trackers, Internet-of-Things devices, online searches… we may have more data, but are we better informed? Connectivity is said to be communication, but do we understand what is being said? The complexity of the data brings new challenges for ethics – often, you don’t know where it comes from, what the quality of the instrumentation was, or how to interpret the data sets.

And then there is artificial intelligence. The early dream was that AI would augment human capability, not displace it. In practice, it looks as if AI applications do have the potential to obliterate human agency. Historically, our frameworks for how to be in the world, how to understand it, were derived from our physical and social environment. Because our direct access to the physical world and the raw data derived from it is compromised, replaced by other people’s representation of other people’s possible worlds, we need to figure out whose ‘news’ we can trust.

When we act in response to the aggregated views of others, and messages filtered through the media, we can end up reinforcing those messages. Yasmin gave as an example rumours of the imminent collapse of a bank, causing a ‘bank run’ which actually does cause the bank’s collapse (in the UK, an example was the September 2007 run on Northern Rock). She also recounted examples of the American broadcast media’s spin on world events, such as the beginning of the war in Iraq, and 9/11. People chose to tune in to those media outlets whose view of the world they preferred. (‘Oh honey, why do you watch those channels? It’s so much nicer on Fox News.’)

There is so much data available out there, that a media channel can easily find provable facts and package them together to support its own interpretation of the world. This process of ‘cementation’ of the silos makes dialogue between opposed camps increasingly difficult – a discontinuity of contemporaneous worlds. This raises questions about the way our contextual filtering is evolving in the era of the cybersocial. And if we lose our ‘contextual compass’, interpreting the world becomes more problematic.

In Artificial Intelligence, there are embedded rules. How does this affect human agency in making judgements? One may try to inject some serendipity into the process – but serendipity, said Yasmin, is not that serendipitous.

Yasmin left us with some questions. Who controls the network, and who controls the message? Should we be sitting back, or are there ethical considerations that mean we should be actively worrying about these things and doing what we can? What is it ethical not to have known, when things go wrong?

 

Drew Mackie prepares network maps for organisations; most of the examples he would give are in the London area. He declared he would not be talking about network theory, although much is implied, and underlies what he would address.

Mostly, Drew and his associates work with community groups. What they seek to ‘map’ are locally available resources, which may themselves be community groups, or agencies. In this context, one way to find out ‘where stuff is’ is to consult some kind of catalogue, such as those which local authorities prepare. And a location map will show you where stuff is. But when it comes to a network map, what we try to find out and depict is who collaborates with whom, across a whole range of agencies, community groups, and key individuals.

When an organisation commissions a network map from Drew, they generally have a clear idea of what they want to do with it. They may want to know patterns of collaboration, what assets are shared, who the key influencers are, and it’s because they want to use that information to influence policy, or to form projects or programmes in that area.

Drew explained that the kinds of network map he would be talking about are more than just visual representations that can be analysed according to various metrics. They are also a kind of database: they hold huge amounts of data in the nodes and connections, about how people collaborate, what assets they hold, etc. So really, what we create is a combination of a database and a network map, and as he would demonstrate, software can help us maintain both aspects.

If you want to build such a network map, it is essential to appoint a Map Manager to control it, update it, and also promote it. Unless you generate and maintain that awareness, in six months the map will be dead: people won’t understand it, or why it was created.

Residents in the area may be the beneficiaries, but we don’t expect them to interact with the map to any great extent. The main users will be one step up. To collect the information that goes into building the map, and to encourage people to support the project, you need people who act as community builders; Drew and his colleagues put quite a lot of effort in training such people.

To do this, they use two pieces of online software: sumApp, and Kumu. SumApp is the data collection program, into which you feed data from various sources, and it automatically builds you a network map through the agency of Kumu, the network visualisation and analytics tool. Data can be exported from either of these.
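
For the curious, the data behind such a map might be prepared as in the Python sketch below. Kumu can import a JSON ‘blueprint’ of elements and connections; the field names here reflect my understanding of that format rather than anything Drew showed, and should be checked against Kumu’s own documentation. The people and organisations are invented:

```python
# Preparing community network-map data as a JSON 'blueprint'. The
# "elements"/"connections" field names are an assumption about Kumu's
# import format; the people and organisations are invented.
import json

blueprint = {
    "elements": [
        {"label": "Youth Centre", "type": "Organisation"},
        {"label": "Parish Council", "type": "Organisation"},
        {"label": "Sam (youth worker)", "type": "Person"},
    ],
    "connections": [
        {"from": "Sam (youth worker)", "to": "Youth Centre",
         "description": "runs weekly sessions"},
        {"from": "Youth Centre", "to": "Parish Council",
         "description": "receives an annual grant"},
    ],
}

with open("community-map.json", "w") as f:
    json.dump(blueprint, f, indent=2)
```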

When people contribute their data to such a system, what they see online is the sumApp front end; they contribute data, then they get to see the generated network map. No-one has to do any drawing. SumApp can be left open as a permanent portal to the network map, so people can keep updating their data; and that’s important, because otherwise keeping a network map up to date is a nightmare (and probably won’t happen, if it’s left to an individual to do).

The information entered can be tagged with a date, and this allows a form of visualisation that shows how the network changes over time.

Drew then showed us how sumApp works, first demonstrating the management ‘dashboard’ through which we can monitor who are the participants, the number of emails sent, connections made and received, etc. So that we can experience that ourselves should we wish, Drew said he would see about inviting everyone present to join the demonstration map.

Data is gathered in through a survey form, which can be customised to the project’s purpose. To gather information about a participant’s connections, sumApp presents an array of ‘cards’, which you can scroll through or search, to identify those with whom you have a connection; and if you make a selection, a pop-up box enquires how frequently you interact with that person – in general, that correlates well with how closely you collaborate – and you can add a little story about why you connect. Generally that is in words, but sound and video clips can also be added.

Having got ‘data input’ out of the way, Drew showed us how the map can be explored. You can see a complete list of all the members of the map. If you were to view the whole map and all its connections, you would see an undecipherable mess; but by selecting a node member and choosing a command, you can for example fade back all but the immediate (first-degree) connections of one node (he chose our member Steve Dale as an example). Or, you could filter to see only those with a particular interest, or other attribute in common.

Drew also demonstrated that you can ask to see who else is connected to one person or institution via a second degree of connection – for example, those people connected to Steve via Conrad. This is a useful tool for organisations which are seeking to understand the whole mesh of organisations and other contacts round about them. Those who are keenest in using this are not policy people or managers, but people with one foot in the community, and the other foot in a management role. People such as children’s centre managers, or youth team leaders – people delivering a service locally, but who want to understand the broader ecology…

Kumu is easy to use, and Drew and colleagues have held training sessions for people about the broad principles, only for those people to go home and, that night, draw their own Kumu map in a couple of hours – not untypically including about 80 different organisations.

Drew also demonstrated a network map created for the Centre for Ageing Better (CFAB). With the help of Ipsos MORI, they had produced six ‘personas’ which could represent different kinds of older people. One purpose of that project was to see how support services might be better co-ordinated to help people as they get older. Because Drew also talked through this in the July 2016 NetIKX meeting, I shall not cover it again here.

Drew also showed an example created in Graph Commons (https://graphcommons.com/). This network visualisation software has a nice feature that lets you get a rapid overview of a map in terms of its clusters, highlighting the person or organisation who is most central within that cluster, aggregating clusters for viewing purposes into a single higher-level node, and letting you explore the links between the clusters. The developers of sumApp are planning a forthcoming feature that will let sumApp work with Graph Commons as an alternative graph engine to Kumu.

In closing, Drew suggested that as a table-group exercise we should discuss ideas for how these insights, techniques and tools might be useful in our own work situations; note these on a sheet of flip-chart paper; and then we could later compare the outputs across tables.

Conrad Taylor

Taxonomy Bootcamp

Taxonomy Bootcamp is happening again on 16–17 October.

As a partnership organisation, NetIKX is able to offer members a 25% discount, the code for which has been sent to all members. If you are a member and have not received this, please email info[at]netikx.org.uk.

 Key features:

  • Essential tips you can start applying right away to managing your taxonomy
  • New approaches to dealing with common issues such as getting business buy-in, and governance
  • Latest applications of taxonomies including NLP, semantics and machine learning
  • How to make the most of cutting-edge technologies and industry-leading software

 For more details and to register, go to:

http://www.taxonomybootcamp.com/London/2018/default.aspx

Blog for the September 2018 Seminar: Ontology is cool!

Our first speaker, Helen Lippell, is a freelance taxonomist and an organiser of the annual Taxonomy Boot Camp in London. She also works with organisations on constructing thesauri, ontologies and linked data repositories. As far as she is concerned, the point of ontology construction is to model the world to help meet business objectives, and that’s the practical angle from which she approached the topic. Taxonomies and ontologies are strongly related: taxonomies are concerned with the relationships between the terms used in a domain, while ontologies focus more on describing the things within the domain and the relationships between them. Neither is inherently better: you choose what is appropriate for your business need. An ontology offers greater capabilities and a gateway to machine reasoning, but if you don’t need those, the extra effort will not be worth it. A taxonomy can provide the controlled vocabularies which help with navigation and search.
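
The distinction can be made concrete with a small sketch (my own, in Python; all the terms are invented examples): a taxonomy records broader/narrower relationships between terms, while an ontology asserts typed relationships between things.

```python
# A toy contrast between the two structures. All terms are invented.

# Taxonomy: each term points to its broader term.
taxonomy = {
    "spaniel": "dog",
    "dog": "mammal",
    "mammal": "animal",
}

# Ontology: typed relationships between things, as (subject, relation, object).
ontology = [
    ("dog", "is_a", "mammal"),
    ("dog", "has_part", "tail"),
    ("vet", "treats", "dog"),
]

def broader_chain(term):
    """Walk up the taxonomy from a term to the root."""
    chain = [term]
    while term in taxonomy:
        term = taxonomy[term]
        chain.append(term)
    return chain

print(broader_chain("spaniel"))  # ['spaniel', 'dog', 'mammal', 'animal']
```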

Using fascinating examples, Helen listed a number of business scenarios in which ontologies can be helpful: information retrieval, classification, tagging and data manipulation. She is doing a lot of work currently on an ontology that will help in content aggregation and filtering, automating a lot of processes that are currently manual.

Implementing an ontology project is not trivial.  It starts with a process of thoroughly understanding and modelling everything connected to the particular domain in which the project and business operate.  Information professionals are well suited to link between the people with technical skills and others who know the business better and can advocate for the end-users of these systems.

Finally, Helen discussed the software that can facilitate this work, both free and paid-for. Her talk was followed by an exercise where we produced our own model, with plenty of help and advice from the speakers. We looked at problems in London that we could help solve, such as guiding visitors to London or a five-year ecology plan. It was fun, although we were not quite up to achieving a high-quality product ready to change the world!

In the second part of the meeting, we heard from Silver Oliver, an information architect. Again, there was a short talk and then a practical exercise. We learnt that Domain Modelling is fundamental to compiling successful taxonomies, controlled vocabularies and classification schemes, as well as formal ontologies. When you set out to model a domain, it is beneficial to engage as many voices and perspectives as possible. It is helpful to do this before you start exploring tools and implementations, so that you don’t exclude people from being able to participate with their different views and perspectives. The exercise that followed looked at creating a website focusing on food and recipes, which was a pleasant topic to work on in our small groups.

The seminar finished with a set of recommendations:

  • Don’t dive into software: start with whiteboards.
  • Don’t work alone, data modelling in the corner. Domain modelling is all about understanding the domain, through conversation and building shared language.
  • Be wary of getting inspiration from other models you believe to be similar. Start with conversations instead – though stealing ideas can be useful!
  • Rather than ‘working closed’ and revealing your results at the end – keep the processes open and show people what you are doing.
  • An evolving ontology of the domain is a good way to capture these discussions and agreements about what things mean.
  • Rather than evolving a humongous monolithic domain model which is hard to get your head around, work with smaller domains with bounded contexts.

That led to a break with refreshments and general conversations based on our experiences during the afternoon.

Extract from a report by Conrad Taylor.

If you want to read the full account of this seminar – follow this link:

https://www.conradiator.com/kidmm/netikx-ontology-domains-sept2018.html

Blog for the July 2018 seminar: Machines and Morality: Can AI be Ethical?

In discussions of AI, one issue that is often raised is that of the ‘black box’ problem, where we cannot know how a machine system comes to its decisions and recommendations. That is particularly true of the class of self-training ‘deep machine learning’ systems which have been making the headlines in recent medical research.

Dr Tamara Ansons has a background in Cognitive Psychology and works for Ipsos MORI, applying academic research, principally from psychology, to various client-serving projects. In her PhD work, she looked at memory and how it influences decision-making; in the course of that, she investigated neural networks, as a form of representation for how memory stores and uses information.

At our NetIKX seminar for July 2018, she observed that ‘Artificial Intelligence’ is being used across a range of purposes that affect our lives, from mundane to highly significant. Recently, she thinks, the technology has been developing so fast that we have not been stepping back enough to think about the implications properly.

Tamara displayed an amusing image, an array of small photos of round light-brown objects, each one marked with three dark patches. Some were photos of chihuahua puppies, and the others were muffins with three raisins on top! People can easily distinguish between a dog and a muffin, a raisin and an eye or doggy nose. But for a computing system, such tasks are fairly difficult. Given the discrepancy in capability, how confident should we feel about handing over decisions with moral consequences to these machines?

Tamara stated that the ideas behind neural networks have emerged from cognitive psychology, from a belief that how we learn and understand information is through a network of interconnected concepts. She illustrated this with diagrams in which one concept, ‘dog’, was connected to others such as ‘tail’, ‘has fur’, ‘barks’ [but note, there are dogs without fur and dogs that don’t bark]. From a ‘connectionist’ view, our understanding of what a dog is, is based around these features of identity, and how they are represented in our cognitive system. In cognitive psychology, there is a debate between this view and a ‘symbolist’ interpretation, which says that we don’t necessarily abstract from finer feature details, but process information more as a whole.

This connectionist model of mental activity, said Tamara, can be useful in approaching some specialist tasks. Suppose you are developing skill at a task that presents itself to you frequently – putting a tyre on a wheel, gutting fish, sewing a hem, planing wood. We can think of the cognitive system as having component elements that, with practice and through reinforcement, become more strongly associated with each other, such that one becomes better at doing that task.

Humans tend to have a fairly good task-specific ability. We learn new tasks well, and our performance improves with practice. But does this encapsulate what it means to be intelligent? Human intelligence is not just characterised by the ability to do certain tasks well. Tamara argued that what makes humans unique is our adaptability: the ability to take learnings from one context and apply them imaginatively to another. And humans don’t have to learn something over many, many trials. We can learn from a single significant event.

An algorithm is a set of rules which specify how certain bits of information are combined in a stepwise process. As an example, Tamara suggested a recipe for baking a cake.

Many algorithms can be represented with a kind of node-link diagram that on one side specifies the inputs, and on the other side the outputs, with intermediate steps between to move from input to output. The output is a weighted aggregate of the information that went into the algorithm.

When we talk about ‘learning’ in the context of such a system – ‘machine learning’ is a common phrase – a feedback or evaluation loop assesses how successful the algorithms are at matching input to acceptable decision; and the system must be able to modify its algorithms to achieve better matches.
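
A minimal sketch of that ‘weighted aggregate plus feedback loop’ idea, in plain Python and entirely my own illustration: a single artificial neuron learning the logical AND of two inputs. Deep networks stack many thousands of such units, which is where the opacity discussed later arises.

```python
# One artificial neuron learning AND: the output is a weighted aggregate
# of the inputs, and the feedback loop nudges the weights whenever the
# output is wrong. Entirely illustrative.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                      # repeated passes over the data
    for (x1, x2), target in samples:
        output = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = target - output          # evaluation step: how wrong were we?
        w[0] += rate * error * x1        # adjust weights to reduce the error
        w[1] += rate * error * x2
        bias += rate * error

print(w, bias)  # weights and bias that now compute AND correctly
```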

Tamara suggests that at a basic level, we must recognise that humans are the ones feeding training data to the neural network system – texts, images, audio etc. The implication is that the accuracy of machine learning is only as good as the data you give it. If all the ‘dog’ pictures we give it are of Jack Russell terriers, it’s going to struggle at identifying a Labrador as a dog. We should also think about the people who develop these systems – they are hardly a model of diversity, and women and ethnic minorities are under-represented. The cognitive biases of the developer community can influence how machine learning systems are trained, what classifications they are asked to apply, and therefore how they work.

If the system is doing something fairly trivial, such as guessing what word you meant to type when you make a keyboarding mistake, there isn’t much to worry about. But what if the system is deciding whether and on what terms to give us insurance, or a bank loan or mortgage? It is critically important that we know how these systems have been developed, and by whom, to ensure that there are no unfair biases at work.

Tamara said that an ‘AI’ system develops its understanding of the world from the explicit input with which it is fed. She suggested that in contrast, humans make decisions, and act, on the basis of myriad influences of which we are not always aware, and often can’t formulate or quantify. Therefore it is unrealistic, she suggests, to expect an AI to achieve a human subtlety and balance in its decisions.

However, there have been some very promising results using AI in certain decision-making contexts, for example, in detecting certain kinds of disease. In some of these applications, it can be argued that the AI system can sidestep the biases, especially the attentional biases, of humans. But there are also cases where companies have allowed algorithms to act in highly inappropriate and insensitive ways towards individuals.

But perhaps the really big issue is that we really don’t understand what is happening inside these networks – certainly, the really ‘deep learning’ networks where the hidden inner layers shift towards a degree of inner complexity which it is beyond our powers to comprehend. This is an aspect which Stephanie would address.

Stephanie Mathieson is the policy manager at ‘Sense About Science’, a small independent campaigning charity based in London. SAS was set up in 2002, at a time when the media was struggling to cope with science-based topics such as genetic modification in farming, and the alleged link between the MMR vaccine and autism.

SAS works with researchers to help them to communicate better with the public, and has published a number of accessible topic guides, such as ‘Making Sense of Nuclear’, ‘Making Sense of Allergies’ and other titles on forensic genetics, chemical stories in the press, radiation, drug safety etc. They also run a campaign called ‘Ask For Evidence’, equipping people to ask questions about ‘scientific’ claims, perhaps by a politician asking for your vote, or a company for your custom.

But Stephanie’s main focus is around their Evidence In Policy work, examining the role of scientific evidence in government policy formation. A recent SAS report surveyed how transparent twelve government departments are about their use of evidence. The focus is not about the quality of evidence, nor the appropriateness of policies, just on being clear what evidence was taken into account in making those decisions, and how. In talking about the use of Artificial Intelligence in decision support, ‘meaningful transparency’ would be the main concern she would raise.

Sense About Science’s work on algorithms started a couple of years ago, following a lecture by Cory Doctorow, the author of the blog Boing Boing, which raised the question of ‘black box’ decision-making in people’s lives. Around the same time, similar concerns were being raised by the independent investigative newsroom ProPublica, and by Cathy O’Neil’s book ‘Weapons of Math Destruction’. The director of Sense About Science urged Stephanie to read that book, and she heartily recommends it.

There are many parliamentary committees which scrutinise the work of government. The House of Commons Science and Technology Committee has an unusually broad remit. They put out an open call to the public, asking for suggestions for enquiry topics, and Stephanie wrote to suggest the role of algorithms in decision-making. Together with seven or eight others, Stephanie was invited to come and give a presentation, and she persuaded the Committee to launch an enquiry on the issue.

The SciTech Committee’s work was disrupted by the 2017 snap general election, but they pursued the topic, and reported in May 2018. (See https://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/news-parliament-2017/algorithms-in-decision-making-report-published-17-19-/)

Stephanie then treated us to a version of the ‘pitch’ which she gave to the Committee.

An algorithm is really no more than a set of steps carried out sequentially to give a desired outcome. A cooking recipe, directions for how to get to a place, are everyday examples. Algorithms are everywhere, many implemented by machines, whether controlling the operation of a cash machine or placing your phone call. Algorithms are also behind the analysis of huge amounts of data, carrying out tasks that would be beyond the capacity of humans, efficiently and cheaply, and bringing a great deal of benefit to us. They are generally considered to be objective and impartial.

But in reality, there are troubling issues with algorithms. Quite rapidly, and without debate, they have been engaged to make important decisions about our lives. Such a decision would in the past have been made by a human, and though that person might be following a formulaic procedure, at least you can ask a person to explain what they are doing. What is different about computer algorithms is their potential complexity and ability to be applied at scale; which means, if there are biases ingrained in the algorithm, or in the data selected for them to process, those shortcomings will also be applied at scale, blindly, and inscrutably.

  • In education, algorithms have been used to rank teachers, and in some cases, to summarily sack the ‘lower-performing’ ones.
  • Algorithms generate sentencing guidelines in the criminal justice system, where analysis has found that they are stacked against black people.
  • Algorithms are used to determine credit scores, which in turn determine whether you get a loan, a mortgage, a credit card, even a job.
  • There are companies offering to create a credit score for people who don’t have a credit history, by using ‘proxy data’. They do deep data mining, investigating how people use social media, how they buy stuff online, and other evidence.
  • The adverts you get to see on Google and Facebook are determined through a huge algorithmic trading market.
  • For people working for Uber or Deliveroo, their bosses essentially are algorithms.
  • Algorithms help the Government Digital Service to decide what pages to display on the gov.uk Web site. The significance is, that site is the government’s interface with the public, especially now that individual departments have lost their own Web sites.
  • A recent Government Office for Science report suggests that government is very keen to increase its use of algorithms and Big Data – it calls them ‘data science techniques’ – in deploying resources for health, social care and the emergency services. Algorithms are being used in the fire service to determine which fire stations might be closed.

In China, the government is developing a comprehensive ‘social credit’ system – in truth, a kind of state-run reputation ranking system – where citizens will get merits or demerits for various behaviours. Living in a modestly-sized apartment might add points to your score; paying bills late or posting negative comments online would be penalised. Your score would then determine what resources you will have access to. For example, anyone defaulting on a court-ordered fine will not be allowed to buy first-class rail tickets, or to travel by air, or take a package holiday. That scheme is already in pilots now, and is supposed to be fully rolled out as early as 2020.

(See Wikipedia article at https://en.wikipedia.org/wiki/Social_Credit_System and Wired article at https://www.wired.co.uk/article/china-social-credit.)

Stephanie suggested a closer look at the use of algorithms to rank teacher performance. Surely it is better to do so using an unbiased algorithm? This is what happened in the Washington, DC school district in the USA – an example described in some depth in Cathy O’Neil’s book. At the end of the 2009–2010 school year, all teachers were ranked, largely on the basis of a comparison of their pupils’ test scores between one year and the next. On the basis of this assessment, 2% of teachers were summarily dismissed and a further 5% lost their jobs the following year. But what if the algorithms were misconceived, and the teachers thus victimised were not bad teachers?

In this particular case, one of the fired teachers was rated very highly by her pupils and their parents. There was no way that she could work out the basis of the decision; later it emerged that it turned on this consecutive-year test score proxy, which had not taken into account the baseline performance from which those pupils came into her class.
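
The arithmetic of that flaw is easy to sketch with invented numbers: if the recorded entry scores were inflated (for instance by a previous school gaming its results), a naive year-on-year comparison penalises exactly the teacher whose pupils genuinely improved.

```python
# Invented numbers only. Pupils arrive recorded at 80 (inflated) though
# their true level is 60; their teacher genuinely lifts them to 70.
recorded_entry_score = 80
true_entry_score = 60
end_of_year_score = 70

naive_value_added = end_of_year_score - recorded_entry_score
fair_value_added = end_of_year_score - true_entry_score

print(naive_value_added)  # -10: the algorithm 'sees' a failing teacher
print(fair_value_added)   # +10: the pupils actually improved
```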

It cannot be a good thing to have such decisions taken by an opaque process not open to scrutiny and criticism. Cathy O’Neil’s examples have been drawn from the USA, but Stephanie is pleased to note that since the Parliamentary Committee started looking at the effects of algorithms, more British examples have been emerging.

Summary:

  • They are often totally opaque, which makes them unchallengeable. If we don’t know how they are made, how do we know if they are weighted correctly? How do we know if they are fair?
  • Frequently, the decisions turned out by algorithms are not understood by the people who deliver that decision. This may be because a ‘machine learning’ system was involved, such that the intermediate steps between input and output are undiscoverable. Or it may be that the service was bought from a third party. This is what banks do with credit scores – they can tell you Yes or No, they can tell you what your credit score is, but they can’t explain how it was arrived at, and whether the data input was correct.
  • There are things that just can’t be measured with numbers. Consider again the example of teacher rankings: the algorithm can process test results, but not issues such as how a teacher deals with the difficulties that pupils bring from their home life.
  • Systems sometimes cannot learn when they are wrong, if there is no mechanism for feedback and course correction.
  • Blind faith in technology can lead to the humans who implement those algorithmically-made decisions failing to take responsibility.
  • The perception that algorithms are unbiased can be unfounded – as Tamara had already explained. When it comes to ‘training’ the system, which data do you include, which do you exclude, and is the data set appropriate? If it was originally collected for another purpose, it may not fit the current one.
  • ‘Success’ can be claimed even when people are having harm done to them. In the public sector, managers may have a sense of problems being ‘fixed’ when teachers are fired. If the objective is to make or save money, and teachers are being fired, and resources saved to be redeployed elsewhere, or profits are being made, it can seem like the model is working. Because the objective defined at the start has been met, the model appears to justify itself. And if we can’t scrutinise or challenge it, agree or disagree, we are stuck in that loop.
  • Bias can exist within the data itself. A good example is university admissions, where historical and outdated social norms which we don’t want to see persist, still lurk there. Using historical admissions data as a training data set can entrench bias.
  • Then there is the principle of ‘fairness’. Algorithms consider a slew of statistics, and come out with a probability that someone might be a risky hire, or a bad borrower, or a bad teacher. But is it fair to treat people on the basis of a probability? We have been pooling risk for decades when it comes to insurance cover – as a society we seem happy with that, though we might get annoyed when the premium is decided because of our age rather than our skill in driving. But when sending people to prison, are we happy to tolerate the same level of uncertainty in the data? And is past behaviour really a good predictor of future behaviour? Would we as individuals be happy to be treated on the basis of profiling statistics?
  • Because algorithms are opaque, there is a lot of scope for ‘hokum’. Businesses are employing algorithms, and government and its agencies are buying their services; but if we don’t understand how the decisions are made, there is scope for agencies to be sold these services by snake-oil salesmen.
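To make the training-data point concrete, here is a minimal sketch. It is purely illustrative: the admissions figures and the ‘model’ – a simple majority-outcome rule per applicant group – are invented for demonstration, not drawn from any real system.

    # Illustrative only: bias in historical training data is reproduced
    # by the model. The data and the 'model' (a majority-outcome rule
    # per applicant group) are invented for demonstration.
    from collections import Counter

    # Hypothetical historical admissions records: (group, admitted?).
    # Group B was historically admitted far less often than group A.
    history = ([("A", True)] * 70 + [("A", False)] * 30
               + [("B", True)] * 20 + [("B", False)] * 80)

    def train(records):
        """'Learn' the majority outcome for each applicant group."""
        tallies = {}
        for group, admitted in records:
            tallies.setdefault(group, Counter())[admitted] += 1
        return {g: c.most_common(1)[0][0] for g, c in tallies.items()}

    model = train(history)
    print(model)  # {'A': True, 'B': False} -- the old disparity, automated

A real system would use a statistical model rather than a majority vote, but the failure mode is the same: whatever pattern is in the historical data, fair or not, is exactly what gets learned.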

What next?

In the first place, we need to know where algorithms are being used to support decision-making, so we know how to challenge the decision.

When the SciTech Committee published its report at the end of May, Stephanie was delighted that it took up her suggestion to ask government to publish a list of all public-sector uses of algorithms, current or planned, where they will affect significant decisions. The Committee also wants government to identify a minister to provide government-wide oversight of public-sector algorithms, to co-ordinate departments’ approaches to their development and deployment, and to oversee partnerships with the private sector. It also recommended ‘transparency by default’ where algorithms affect the public.

Secondly, we need to ask for the evidence. If we don’t know how these decisions are being made, we don’t know how to challenge them. Whether teacher performance is being ranked, criminals sentenced or services cut, we need to know how those decisions are reached. Organisations should apply standards to their own use of algorithms, and government should set the right example. Where decision-support algorithms are used in the public sector, it is vital that people are treated fairly, that someone can be held accountable, that decisions are transparent, and that hidden prejudice is avoided.

The public sector, because it holds significant datasets, actually holds a lot of power that it doesn’t seem to appreciate. In a couple of recent cases, it has given data away without demanding transparency in return. A notorious example was the 2016 deal between the Royal Free Hospital and Google DeepMind, to develop algorithms to predict kidney failure, which led to the inappropriate transfer of sensitive personal data.

In the Budget of November 2017, the government announced a new Centre for Data Ethics and Innovation, but it hasn’t yet said much about its remit. It is consulting on this until September 2018, so perhaps by the end of the year we will know more. The SciTech Committee report had many strong recommendations for what that remit should be, including evaluation of accountability tools and examination of biases.

The Royal Statistical Society also has a council on data ethics, and the Nuffield Foundation has set up a new commission, now the Convention on Data Ethics. Stephanie’s concern is that we now have several different bodies paying attention; they should each set out their remits so as to avoid duplicating work, so we know whose reports to read and whose recommendations to follow. There needs to be some joined-up thinking, but currently none of them seem to be listening to each other.

Who might create a clear standard framework for data ethics? Chi Onwurah, the Labour Shadow Minister for Business, Energy and Industrial Strategy, recently said that the role of government is not to regulate every detail, but to set out a vision for the type of society we want, and the principles underlying that. She has also said that we need to debate those principles; once they are clarified, it makes it easier (but not necessarily easy) to have discussions about the standards we need, and how to define them and meet them practically.

Stephanie looks forward to seeing the Government’s response to the Science and Technology Committee’s report – a response which is required by law.

A suggested Code of Conduct came out in late 2016, with five principles for algorithms and their use. They are Responsibility – someone in authority to deal with anything that goes wrong, and in a timely fashion; Explainability – the new GDPR includes a clause giving a right to explanation of decisions made about you by algorithms (this is now law, but much will depend on how it is interpreted in the courts); and the remaining three, Accuracy, Auditability and Fairness.

So basically, we need to ask questions about the protection of people, and there have to be these points of challenge. Organisations need to ensure mechanisms of recourse if anything does go wrong, and they should also consider liability. At a recent speaking engagement on this topic, Stephanie told a roomful of lawyers that they should not see this as a way to shirk liability, but should think ahead about what will happen.

This conversation is currently being driven by the autonomous-car industry, which is worried about insurance and insurability. When something goes wrong with an algorithm, whose fault might it be? The person who commissioned and deployed it? The person who designed it? Might something have gone wrong in the Cloud that day, such that a perfectly good algorithm just didn’t work as it was supposed to? ‘People need to get to grips with these liability issues now, otherwise it will be too late, and some individual or group of individuals will get screwed over,’ said Stephanie, ‘while companies try to say that it wasn’t their fault.’

Regulation might not turn out to be the answer. If you do regulate, what do you regulate? The algorithms themselves, similar to the manner in which medicines are scrutinised by the medicines regulator? Or the use of the algorithms? Or the outcomes? Or something else entirely?

Companies like Google, Facebook, Amazon and Microsoft – have they lost the ability to regulate themselves? How are companies regulating themselves, and should they be left to do so? Stephanie doesn’t think we can rely on that. Those are some of the questions she put to the audience.

Tamara took back the baton. She noted that we interact extensively with AI through many aspects of our lives. Many jobs that have been thought of as a human preserve – thinking jobs – may become more automated, handled by a computer or neural network. Jobs as we know them now may not be the jobs of the future. Does that mean unemployment, or just a change in the nature of work? It is likely that in future we will be working side by side with AI on a regular basis. Already, decisions about bank loans, insurance, parole and employment increasingly rely on AI.

As humans, we are used to interacting with each other. How will we interact with non-humans – specifically, with AI entities? Tamara referenced the famous ‘ELIZA’ experiment conducted between 1964 and 1968 by Joseph Weizenbaum, in which a computer program was written to simulate a practitioner of person-centred psychotherapy, communicating with the user via text dialogue. In response to text typed in by the user, ELIZA responded with a question, as if sympathetically trying to elicit further explanation or information. This illustrates how we tend to project human qualities onto non-human systems. (A wealth of other examples is given in Sherry Turkle’s 1984 book, ‘The Second Self’.)
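The pattern-matching trick behind ELIZA can be sketched in a few lines of Python. This is not Weizenbaum’s original code or script – the patterns and canned responses here are invented for illustration – but it shows the mechanism: match a keyword pattern, ‘reflect’ the user’s pronouns, and answer with a question.

    # A minimal ELIZA-style responder: keyword patterns plus pronoun
    # 'reflection', always replying with a question or a neutral prompt.
    import re

    REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        """Swap first-person words for second-person ones."""
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(text):
        for pattern, template in RULES:
            match = pattern.search(text)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # default prompt when nothing matches

    print(respond("I am worried about my exams"))
    # -> How long have you been worried about your exams?

That so shallow a trick was enough for some users to confide in the program is precisely the projection of human qualities that Tamara described.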

However, sometimes machine/human interactions don’t go so smoothly. Robotics professor Masahiro Mori studied this in the 1970s, examining people’s reactions to robots made to appear human. Many people responded to such robots with greater warmth as they were made to appear more human, but at a certain point along that transition came an experience of unease and revulsion which he dubbed the ‘Uncanny Valley’. This is the point where something jarring about the appearance, behaviour or mode of conversation of the artificial human makes you feel uncomfortable and shatters the illusion.

‘Uncanny Valley’ research has been continued since Mori’s original work. It has significance for computer-generated on-screen avatars, and CGI characters in movies. A useful discussion of this phenomenon can be found in the Wikipedia article at https://en.wikipedia.org/wiki/Uncanny_valley

There is a Virtual Personal Assistant service for iOS devices called ‘Fin’, which Tamara referenced (see https://www.fin.com). Combining an iOS app with a cloud-based computation service, ‘Fin’ avoids some of the risk of the Uncanny Valley by interacting purely through voice command and on-screen text response. Is that how people might feel comfortable interacting with an AI? Or would people prefer something that attempts to represent a human presence?

Clare Parry remarked that she had been at an event about care robots, where you don’t get an Uncanny Valley effect because despite a broadly humanoid form, they are obviously robots. Clare also thought that although robots (including autonomous cars) might do bad things, they aren’t going to do the kind of bad things that humans do, and machines do some things better than people do. An autonomous car doesn’t get drunk or suffer from road-rage…

Tamara concluded by observing that our interactions with these systems shape how we behave. This is not new – we have always been shaped by the systems and tools we create. The printing press moved us from an oral, social way of sharing stories to a more individual experience, which arguably has made us more individualistic as a society. Perhaps our interactions with AI will shape us similarly, and we should stop and think about the implications for society. Will a partnership with AI bring out the best of our humanity, or make us more machine-like?

Tamara would prefer us not to think of Artificial Intelligence as a reified machine system, but of Intelligence Augmented, shifting the focus of discussion onto how these systems can help us flourish. And who are the people who need that help the most? Can we use these systems to tackle the big problems we face, such as poverty, climate change and disease? How can we integrate these forms of computational assistance to help us make the best of what makes us human?

There was so much food for thought in the lectures that everyone was happy to talk together in the final discussion and over the refreshments that followed. We could campaign to say, ‘We’ve got to understand the algorithms, we’ve got to have them documented’, but perhaps there are certain kinds of AI practice (such as medical diagnosis from imaging input) where that is just not going to be possible.

From a blog by Conrad Taylor, June 2018

Some suggested reading