Ethical Artificial Intelligence



What is ethical, or responsible, artificial intelligence (AI)? In essence, we can identify three concerns: “the moral behaviour of humans as they design, make, use and treat artificially intelligent systems”; “a concern with the behaviour of machines, in machine ethics” – for example, computational ethics; and “the issue of a possible singularity due to superintelligent AI” – a fascinating glimpse into a future in which computers might ‘take over’. Is this still science fiction?

This seminar encompasses important topics for Knowledge Management and Information Management practitioners. Topics will include ‘bias’ in the machine, in machine learning, in the content cycle, in the taxonomy, in the text analytics, in the algorithms and, of course, in the real world. Critically, where do knowledge organisation systems fit in, and how can practitioners play a role in creating ethical artificial intelligence? Should companies begin to develop an AI ethics strategy that is publicly available?


Our speaker is Ahren Lehnert – Senior Manager, Text Analytics Solutions.

He is based in Oakland, California, USA.

Ahren is a graduate of Eastern Michigan University in the Mid-West and a post-graduate of Stony Brook University in New York.

Ahren is a knowledge management professional passionate about knowledge capture, organisation, categorisation and discovery. His main areas of interest are text analytics, search and taxonomy and ontology construction, implementation and governance.

His fifteen years of experience spans many sectors including marketing, health care, Federal and State government agencies, commercial and e-commerce, geospatial, oil and gas, telecom and financial services.

Ahren is always seeking ways to improve functionality and deliver the most ‘painless’ user experience possible, based on the state of the industry, best practices and standards.


Time and Venue

Zoom online meeting on Thursday July 22nd, 2021 at 2:30pm GMT




Slides will be available after the event.



Blog will be available after the event.


Study Suggestions

Study suggestions will be available after the event.


November 2020 Seminar: The framework and ISO standards for Collaboration, KM and Innovation and how these might be integrated into your organisation


This seminar, affectionately known as the Two Ronnies show, aimed to introduce the Five Dimensions Framework of Collaboration, Knowledge and Innovation. The speakers gave an overview of the published standards for collaboration, knowledge management and innovation – ISO 44001, ISO 30401 and ISO 56002 – and established the relationships between them, which was of great interest to the gathered crowd. They then tackled how it would be valuable to do more to integrate these into a more interdependent, holistic and integrated management system, and talked about the relevance of systems thinking. This was followed by some breakout sessions where people could have focused discussions on the issues raised. The meeting finished with a summary from the two Rons.


Ron Young is the founder of Knowledge Associates International, a knowledge management consulting and solutions group based at St Johns Innovation Centre, Cambridge U.K. He is acknowledged as a leading international expert and thought leader in strategic knowledge asset management and innovation. He specializes in knowledge driven results for organizations. He advised and assisted the UK DTI Innovation Unit in the production of the UK Government White Paper ‘UK Competitiveness in the Knowledge Driven Economy’ (1999).

He regularly provides keynote presentations and workshops at leading knowledge management and innovation conferences around the world. He has chaired for several years both the British Standards Institute (BSI) Knowledge Management Standards Committee and the European Knowledge Management Standards Committee.

He is a visiting lecturer for international business administration and global knowledge economy programs. He runs regular Knowledge Asset Management master classes at King’s College Cambridge University, UK. He is a consultant for the World Bank, Washington, USA, and for the European Commission, Joint Research Centre, Brussels.

He is currently developing knowledge management strategies and systems, and advising and assisting major multi-national corporations, international UN agencies, national governments, military, security, and professional institutions around the world. He was a lead consultant for the European Commission’s 2 million euro ‘Know-Net’ project. He has jointly authored seven books. His hobbies are flying, music, yoga and meditation, travel and philosophy.

Knowledge Associates – leverage the world’s knowledge.

Ron Donaldson is a self-employed knowledge ecologist working with methods and ideas from a range of disciplines, from problem solving, open innovation and design thinking, through collaborative community building, to using narrative frameworks to communicate complex ideas. He works closely with the Cognitive Edge project. Ron is a member of the NetIKX Committee and supports speakers at our seminars.

Time and Venue

November 26th at 2:30 pm on the Zoom platform. This is a virtual session.


Not available




See our blog report: Framework and ISO standards for Collaboration, KM and Innovation

Study Suggestions

ISO/IEC 27001:2013 Information technology – Security techniques – Information security management systems – Requirements.
ISO 55001:2014 Asset Management – Management Systems – Requirements
ISO 56002:2019 Innovation Management – Innovation Management System – Guidance
ISO 9001:2015 Quality Management Implementation Guide
ISO 44001:2017 Collaborative Business Relationship Management
ISO 30401:2018 Knowledge Management Systems – Requirements

Also a useful site for KM writing: Nick Milton is a director of ‘Knoco’, an international firm of Knowledge Management consultants. His website is a cornucopia of KM material. Incidentally, the ‘4th enabler’ for KM is ‘governance’.
The Systemic Design Group may also be of interest.

Blog for July 2020 Seminar: A Library during lockdown

Antony Groves has been working at the University of Sussex for 15 years, starting in a ‘front line’ role and continuing on into his current job, in which he is always talking to and supporting a large number of students at both undergraduate and postgraduate levels. He is a member of CILIP and blogs for the Multi Media and Information Technology Group. Antony is a reflective practitioner and believes in making things happen. As of now there are two major priorities – proactively working towards making the UoS website accessible by the government deadline of September 23rd 2020, and reactively working to make the UoS website and services as useful as possible following the Covid-19 lockdown in March.

Two key ideas – accessibility and usability. Accessibility can be straightforward things such as font size, changes in colour and ensuring that the site can be operated from the keyboard. For more on accessibility, see ‘Strategic approaches to implementing accessibility’, known more colloquially as ‘The Kent strategy slides’.

2019 saw over a million visits to the library website, with 6,170 on the busiest day – Tuesday May 14th. There has been a shift (a pivot) from physical visits to digital space. The main focus is on the user.
At this time there is a rush to open things up after lockdown without necessarily thinking about who is coming through the door and what they want now. Spending all your time updating and coding can leave you ‘removed’ from the user. The Government Design Principles are a good place to start –

Now this is for everyone. You start with ‘user needs’ and you design with data. You build ‘digital services’ not websites. Remember that ‘A service is something that helps people to do something’. Iterate, then iterate again. We began by speaking to the academic community and gathering feedback. Over 100 pieces of feedback were collected and grouped into four main themes: architecture, behaviour, content and labelling. Top tasks were identified (e.g. searching for and finding books, booking rooms, accessing an account) –
People mainly make use of a handful of tasks so develop these first.

Architecture – “Confusing having two types of navigation”.

Behaviour – “Have never used library search tabs”.

Content – “More photos of the library and more infographics”.

Labelling – “Skills hub should have a description mentioning academic skills”.

Design with data – We benchmarked with other institutions.

We looked at Google Analytics – most/least viewed pages, along with bounce and exit rates. We ran ‘card sorts’ to determine site structure. We created user stories to help edit pages. Two examples of the results: the new ‘Making your research available’ section has very low bounce and exit rates, and these have also dropped across the whole site, indicating that people are finding what they expect to. The ‘Find a book in the library’ page had 6,785 views, compared with 1,182 in the 2018 Autumn term when it was located in the ‘Using the Library’ section.

Iteration goes on and on. There is still much to ‘unpack’ and ‘improve’. User testing is currently being organised, and usage is being analysed to see which parts of the website are seeing fewer views and less engagement. We are working with teams inside and outside the UoS Library to make the digital services as useful as they can be to our community.

When Covid-19 hit the UK we considered carefully how to respond. We devised a three-pronged approach: Pivot / Add / Hide. ‘The Pivot’ involved moving the library from a physical presence into a digital space. For example, study rooms were no longer available and room bookings were changed into Zoom bookings. ‘The Add’ meant introducing new services: a ‘click and study’ service starting this week, whereby individuals can book a study place, plus a ‘click and collect’ service and ‘Library FAQs’ appropriate for the period of lockdown. ‘The Hide’ concerned removing information on the website that was no longer appropriate, such as ‘Information for visitors’. Instead, we created a guide to ‘Open Access Items’ and a ‘Schools Guide’.

All this work has been recognised by a ‘Customer Service Excellence’ award.

Antony is pleased that the work of the UoS Library Staff has been recognised but he takes it with a ‘pinch of salt’ as he is intent on doing more ‘user testing’ and receiving much more feedback as well as talking to his community.

In conclusion, a note on the inspiration behind this approach to digital services – “Revisiting Ranganathan: Applying the Five Laws of Library Science to Inclusive Web Design”. Ten changes we’ve made to the library website since lockdown –

Rob Rosset 25/07/2020




Blog for May 2020: Gurteen knowledge cafe

How do we thrive in a hyper-connected, complex world?

An afternoon of conversation with David Gurteen

There was a great start to this Zoom meeting. David Gurteen gave some simple guidance to participants so we could all Zoom smoothly. It was a great best-practice demo. We are all becoming good at Zoom, but simple guidance on how to set the visuals and mute the sound is a wise precaution to make sure we are all competent with the medium. He also set out how the seminar would be scheduled, with breakout groups and plenaries. It was to be just like a NetIKX seminar in the BDA meeting room, even though it was totally different! I felt we were in very safe hands, as David was an early adopter of Zoom but still recognises that new people will benefit from clarity about what works best. Well done David.

The introduction set the scene for the content of our café. We were looking at how we live in a hyper-connected, complex, rapidly evolving world. David outlined many dimensions to this connectedness, including transport changes, the internet, social media, global finances…

In his view, over the last 75 years this increased connectivity has led to massive complexity, and today we can conceive of two worlds – an old world before the Second World War and a new world that has emerged since 1945. Not only are our technological systems complex, but we human beings are immensely complex, non-rational, emotional creatures full of cognitive biases. This socio-technical complexity, together with our human complexity, has resulted in a world that is highly volatile, unpredictable, confusing and ambiguous. Compare the world now with the locally focused world that dominated the pre-war years.

Furthermore, this complexity is accelerating as we enter the fourth industrial revolution in which disruptive technologies and trends such as the Internet of Things, robotics, virtual reality, and artificial intelligence are rapidly changing the way we live and work. Our 20th-century ways of thinking about the world and our old command and control, hierarchical ways of working no longer serve us well in this complex environment.

Is it true that if we wish to thrive, we need to learn to see the world in a new light, think about it differently, and discover better ways in which to interact and work together?

Break out groups

With practised expertise, David set us up into small break-out groups that discussed the talk so far. Did we agree, or feel continuity was a stronger thread than change? Then we swapped groups to take the conversation further.


After the break-out groups, David looked at the two linked ideas behind Conversational Leadership. He had some wonderful quotes about leadership. Was the old command-and-control model gone? Do leaders have to hold a specific role, or can we all give leadership when the opportunity is there? Of course, David provided examples of this, but perhaps after the seminar a very powerful example stands out – the 22-year-old footballer changing the mind of a government with an 80-seat majority! You don’t need to have the expected ‘correct’ label to be a powerful leader.


We also looked at the other element: talking underpins how we work together. Using old TV clips and quotes, David urged us to consider how we communicate with each other, and whether there is scope to change the world through talking. Again, there was plenty of food for thought as we considered new ideas such as ‘unconscious bias’, ‘media bubbles’, ‘fake news’ and the global reach of social media.

We then broke into small groups again, to take the conversation further, using David’s talk as a stimulus.


At the end of the break-out groups, we re-joined as a mass of faces smiling out of the screen, ready to share our thoughts. It is a wonderful thing, when you make a point, to see heads nodding across the Zoom squares. I recommend it to anyone who has not tried it!

Some themes emerged from the many small group chats. One was the question of the fundamental nature of change. Was our world so different when the humans within it remain very much the same? We looked very briefly at what we think human nature is and whether it remains a constant despite the massively different technology we use on a daily basis. Even if humans are the same fallible clay, the many practical ways we can now communicate give us much more potential to hear and be heard.

We also considered the role of trust. In our workplaces, trust often seems to be in short supply, but it is key to leaders taking on authority without becoming authoritarian. The emphasis on blame culture and short-term advantage has to be countered by building genuine trust.

Is there potential for self-governing teams? The idea sounds inviting but would not ensure good leadership or sharing of ideas.  The loudest voice might still monopolise attention. And with some justification, as not everyone wants to be pro-active. Some prefer to follow as their choice, and others like to take part but balk at the tedium of talking through every minute decision!  This idea may have potential, but we agreed it would not be a panacea.

We did agree that roles and rules could be positive to help give shape to our working lives, but that they need not constrict our options to lead when the time comes.  And we can see the leadership role that our professional calling suggests.   With so many new information channels, so many closed groups and so many conflicting pressures, as information or knowledge professionals, we can take a leadership role in helping and supporting our chosen groups of very human work colleagues to understand and thrive in this complex and evolving world. Conversational Leadership should be one of the tools we take away to enable our work with colleagues.

Final Notes:

The NetIKX team.

NetIKX is a community of interest based around Knowledge and Information Professionals. We run 6 seminars each year and the focus is always on top quality speakers and the opportunity to network with peers. We are delighted that the Lockdown has not stopped our seminars taking place and expect to take Zoom with us when we leave lockdown! You can find details of how to join the lively NetIKX community on our Members page.

Our Facilitator

David Gurteen is a writer, speaker, and conversational facilitator. The focus of his work is Conversational Leadership – a style of working where we appreciate the power of conversation and take a conversational approach to the way that we connect, relate, learn and work with each other.  He is the creator of the Knowledge Café – a conversational process to bring a group of people together to learn from each other, build relationships and make a better sense of a rapidly changing, complex, less predictable world. He has facilitated hundreds of Knowledge Cafés and workshops in over 30 countries around the world over the past 20 years. He is also the founder of the Gurteen Knowledge Community – a global network of over 20,000 people in 160 countries.  Currently, he is writing an online book on Conversational Leadership. You can join a Knowledge Café if you consult his website.

Blog for July 2019: Content strategy

The speakers for the NetIKX meeting in July 2019 were Rahel Bailie and Kate Kenyon. Kate promised they would explain what Content Strategy is and what it isn’t, and how it relates to the work of Knowledge Management professionals. The two speakers came at this work from different backgrounds: Rahel deals with technical systems, while Kate trained as a journalist and worked at the BBC.

Managing content is different from managing data.   Content has grammar and it means something to people.  Data such as a number in a field is less complex to manage.  This is important to keep in mind because businesses consistently make the mistake of trying to manage content as if it were data.  Content strategy is a plan for the design of content systems. A content system can be an organic, human thing.  It isn’t a piece of software, although it is frequently facilitated by software.  To put together a content strategy, you have to understand all the potential content you have at hand.  You want to create a system that gives repeatable and reliable results.  The system deals with who creates content, who checks it, who signs it off and where it is going to be delivered.   The system must govern the management of content throughout its entire lifecycle.  The headings Analyse, Collect, Manage and Deliver can be useful for this.

Kate pointed out that if you are the first person in your organisation to be asked to look at content strategy, you might find yourself working in all these areas, but in the long run they should be delegated to the appropriate specialists, who can follow the Content Strategy plan. In brief, the first part of content strategy is to assess business need, express it in terms of business outcomes and write a business case. It is part of the job to get a decent budget for the work! When you have a clear idea of what the business wants to achieve, the next question is – where are we now? What should we look at? You will need to audit current content and who is producing it and why. Assess the roles of everyone in the content lifecycle – not just writers and editors but also those who commission, create and manage content, as well as those who upload and archive it. Then look at the processes that enable this. Benchmark against standards to see if the current system is ‘good enough’ for purpose. Define the scope of your audit appropriately. The audit is not a deliverable, though vital business information may emerge; it is to help you see priorities, perhaps by doing Gap Analysis. Then create a requirements matrix, which helps clarify what is top priority and what is not.

From this produce a roadmap for change and each step of the way keep the business on side.  A document signed off by a Steering Committee is valuable to ensure the priorities are acknowledged by all!

The discussion that followed considered the work in relation to staff concerns.  For example people might be scared at the thought of change, or worried about their jobs.   It was great to have such experienced speakers to meet concerns that were raised.  The meeting ended with Kate demonstrating some of the positive outcomes that could be achieved for organisations.  There is huge potential for saving money and improving public facing content.

This is taken from a report by Conrad Taylor.  See the full report on Conrad Taylor’s website: Content Strategy

Blog for March 2019 Seminar: Open Data

The speaker at the NetIKX seminar in March 2019 was David Penfold, a veteran of the world of electronic publishing who also participates in ISO committees on standards for graphics technology.  He has been a lecturer at the University of the Arts London and currently teaches Information Management in a publishing context.

David’s talk looked at the two aspects of Open Data.  The most important thing for us to recognise is Data as the foundation and validation of Information.  He gave a series of interesting historical examples and pointed out that closer to the present day, quantum theory, relativity and much besides all developed because the data that people were measuring did not fit the predictions that earlier theoretical frameworks suggested.  A principle of experimental science is that if the data from your experiments don’t fit the predictions of your theories, it is the theories which must be revisited and reformulated.

David talked about some classificatory approaches. He mentioned the idea of a triple, where you have an entity, plus a concept of property, plus a value. This three-element method of defining things is essential to the implementation of Linked Data. Unless you can establish relationships between data elements, they remain meaningless – just bare words or numbers. A number of methods have been used to associate data elements with each other and with meaning. The Relational Database model is one; spreadsheets are based on another, and the Standard Generalised Markup Language (and subsequently XML) was an approach to giving structure to textual materials. Finally, the Semantic Web and the Resource Description Framework have developed over the last two decades.
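To make the triple idea concrete, here is a minimal sketch in Python. The entities, properties and values are invented for illustration – real Linked Data systems identify things with URIs and store triples in RDF databases – but the shape of the model is the same: every fact is an (entity, property, value) statement, and meaning emerges from the relationships.

```python
# Each fact is an (entity, property, value) triple.
# Illustrative data only - not drawn from any real dataset.
triples = [
    ("London", "isCapitalOf", "United Kingdom"),
    ("London", "population", 8900000),
    ("United Kingdom", "type", "Country"),
]

def values_for(entity, prop, store):
    """Return every value linked to `entity` by `prop`."""
    return [v for (e, p, v) in store if e == entity and p == prop]

print(values_for("London", "isCapitalOf", triples))  # → ['United Kingdom']
```

In isolation, “8900000” is just a number; it is the triple connecting it to “London” via “population” that gives it meaning – which is David’s point about bare words and numbers.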

David then moved on to what it means for data to be Open. There are various misconceptions around this – it does not mean Open Access, a term used within the worlds of librarianship and publishing to mean free-of-charge access, mainly to academic journals and books. We are also not talking about Open Archiving, which has a close relationship to the Open Access concept; much of the effort in Open Archiving goes into developing standardised metadata so that archives can be shared. Open data is freely available. It is often from government but could be from other bodies and networks, and even private companies.

We then watched a short video clip, from 2012, of Sir Nigel Shadbolt, a founder of the Open Data Institute, which set up the open data portals for the UK government. He explains how government publication of open data, in the interests of transparency, is now found in many countries and at national, regional and local levels. The benefits include improved accountability, better public services, improvement in public participation, improved efficiency, and the creation of social value and of innovation value to companies.

We heard about examples of Open Data; for example, Network Rail publishes open data and benefits through improvements in customer satisfaction. It says that its open data generates technology-related jobs around the rail sector and saves costs in information provision when third parties invest in building information apps based on that data. The data is used by commercial users, but also by the rail industry and Network Rail itself. It can also be accessed by individuals and academia.

Ordnance Survey open data is important within the economy and in governance. David uses one application in his role as Chair of the Parish Council in his local village. The data allows them to see Historic England data for their area, and Environment Agency information showing sites of special scientific interest and areas of outstanding natural beauty.

After the tea-break, David showed three clips from a video of a presentation by Tim Berners-Lee. David then explained how the Semantic Web works. It is based on four concepts: a) metadata; b) structural relationships; c) tagging; and d) the Resource Description Framework method of coding, which in turn is based on XML.

The Open Data Institute has developed an ‘ethics canvas’, which we looked at to decide what we thought about it.  It gives a list of fifteen issues which may be of ethical concern.  We discussed this in our table groups and this was followed by a general discussion.  There were plenty of examples raised from our collective experience, which made for a lively end to the seminar.

This is taken from a report by Conrad Taylor

To see the full report follow this link: Conradiator : NetIKX meeting report : Open Data

Blog for January 2019: Wikipedia & knowledge sharing

In January 2019, NetIKX held a seminar on the topic ‘Wikipedia and other knowledge-sharing experiences’. Andy Mabbett gave a talk about one of the largest global projects in knowledge gathering in the public sphere: Wikipedia and its sister projects. Andy is an experienced editor of Wikipedia with more than a million edits to his name. He worked in website management and always kept his eyes open for new developments on the Web. When he heard about the Wikipedia project, founded in 2001, he searched there for information about his local nature reserves – he is a keen bird-watcher. There was nothing to be found, and this inspired him to add his first few entries. He has been a volunteer since 2003 and makes a modest living with part of his income stream coming from training and helping others to become Wikipedia contributors too. Volunteers are expected to write from publicly accessible material, not create new information. The sources can be as diverse and scattered as necessary, but Wikipedia pulls that information together coherently and gives links back to the sources.

The Wikimedia Foundation, which hosts Wikipedia, says: ‘Imagine a world in which every single human being can freely share in the sum of all knowledge. That is our commitment.’

Wikipedia is the free encyclopaedia that anybody can edit.  It is built by a community of volunteers contributing bit by bit over time.  The content is freely licensed for anybody to re-use, under a ‘creative commons attribution share-alike’ licence.  You can take Wikipedia content and use it on your own website, even in commercial publications and all you have to do in return is to say where you got it from.  The copyright in the content remains the intellectual property of the people who have written it.

The Wikimedia Foundation is the organisation which hosts Wikipedia; it keeps the servers and the software running. The Foundation does not manage the content. It occasionally gets involved over legal issues, for example child protection, but otherwise it doesn’t set editorial policy or get involved in editorial conflicts. That is the domain of the community.

Guidelines and principles.

Wikipedia operates according to a number of principles called the ‘five pillars’.

  • It is an encyclopaedia which means that there are things that it isn’t: it’s not a soap box, nor a random collection of trivia, nor a directory.
  • It’s written from a neutral point of view, striving to reflect what the rest of the world says about something.
  • As explained, everything is published under a Creative Commons open license.
  • There is a strong ethic that contributors should treat each other with respect and civility. That is the aim, although Wikipedia isn’t a welcoming space for female contributors and women’s issues are not as well addressed as they should be.  There are collective efforts to tackle the imbalance.
  • Lastly there is a rule that there are no firm rules! Whatever rule or norm there is on Wikipedia, you can break it if there is a good reason to do so.  This does give rise to some interesting discussions about how much weight should be given to precedent and established practice or whether people should be allowed to go ahead and do new and innovative things.

In Wikipedia, all contributors are theoretically equal and hold each other to account. There is no editorial board, and there are no senior editors who carry a right of overrule or veto. ‘That doesn’t quite work in theory,’ says Andy, ‘but like the flight of the bumblebee, it works in practice.’ For example, in September 2018, newspapers ran a story that the Tate Gallery had decided to stop writing biographies of artists for their website and would use copies of Wikipedia articles instead. The BBC does the same with biographies of musicians and bands on their website, and also with articles about species of animals. The confidence of these institutions comes because it is recognised that Wikipedians are good at fact-checking, and that if errors are spotted, or assertions made without a supporting reliable reference, they get flagged up. But there are some unintended consequences too. Because dedicated Wikipedians have the habit of checking articles for errors and deficits, Wikipedia can be a very unfriendly place for new and inexperienced editors. A new article can get critical ‘flags’ to show something needs further attention. People can get quite zealous about fighting conflicts of interest, bias or pseudo-science.

For most people there is just one Wikipedia.  But there are nearly 300 Wikipedias in different languages.  Several have over a million articles, some only a few thousand. Some are written in a language threatened with extinction and they constitute the only place where a community of people is creating a website in that language, to help preserve it as much as to preserve the knowledge.

Wikipedia also has a number of ‘sister projects’.  These include:

  • Wiktionary is a multilingual dictionary and thesaurus.
  • Wikivoyage is a travel guide.
  • Wikiversity has a number of learning modules so you can teach yourself something.
  • Wikiquote is a compendium of notable and humorous quotations.

Probably the Wikidata project is the most important of the sister projects, in terms of the impact it is having and its rate of expansion. Many Wikipedia articles have an ‘infobox’ on the right side. These information boxes are machine readable, as they have a microformat mark-up behind the scenes. From this came the idea of gathering all this information centrally. This makes it easier to share across different versions of Wikipedia, and it means all the Wikipedias can be updated together – for example, if someone well known dies. Under their open licence, the data can be used by any other project in the world. Using the Wikidata identifiers for millions of things can help your system become more interoperable with others. As a result, there is a huge asset of data, including data taken from other bodies (for example, English Heritage or chemistry databases).
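The interoperability point is easy to sketch. Suppose two catalogues describe the same people under different local labels; if each record also carries a Wikidata identifier (QID), the two systems can match records without agreeing on a naming scheme. The QIDs below are real Wikidata identifiers, but the records and field names are invented for illustration:

```python
# Hypothetical catalogue records tagged with Wikidata QIDs.
# Q42 = Douglas Adams, Q84 = London on Wikidata; all other data is invented.
local_records = {
    "rec-001": {"label": "Douglas Adams", "wikidata": "Q42"},
    "rec-002": {"label": "London", "wikidata": "Q84"},
}
partner_records = {
    "A7": {"name": "Adams, Douglas", "wikidata": "Q42"},
}

def matches(ours, theirs):
    """Pair up records that share a Wikidata QID, whatever the local labels say."""
    theirs_by_qid = {rec["wikidata"]: key for key, rec in theirs.items()}
    return {key: theirs_by_qid[rec["wikidata"]]
            for key, rec in ours.items()
            if rec["wikidata"] in theirs_by_qid}

print(matches(local_records, partner_records))  # → {'rec-001': 'A7'}
```

Note that ‘Douglas Adams’ and ‘Adams, Douglas’ match despite the different labels – the shared identifier does the work, which is exactly what Andy means by systems becoming more interoperable.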

Wikipedia has many more such projects that Andy explained to us, and the information was a revelation to most of us.  So we were then delighted to spend some time on an exercise in small groups.  This featured two speakers who talked about the way they had used a shared Content Management system to gather and share knowledge.  These extra speakers circulated round the groups to help the discussions.  The format was different to NetIKX’s usual breakout groups, but feedback from participants was very positive.

This blog is based on a report by Conrad Taylor.

To see the full report you can follow this link: Conradiator : NetIKX meeting report : Wikipedia & knowledge sharing


Blog for the November 2018 seminar: Networks

The rise of on-line social network platforms such as Facebook has made the general population more network-aware. Yet, at the same time, this obscures the many other ways in which network concepts and analysis can be of use. Network Science was billed as the topic for the November 2018 NetIKX seminar, and in hopes that we would explore the topic widely, I did some preliminary reading.

I find that Network Science is perhaps not so much a discipline in its own right, as an approach with application in many fields – analysis of natural and engineered geography, transport and communication, trade and manufacture, even dynamic systems in chemistry and biology. In essence, the approach models ‘distinct elements or actors represented by nodes (or vertices) and the connections between [them] as links (or edges)’ (Wikipedia), and has strong links to a branch of mathematics called Graph Theory, building on work by Euler in the 18th century.
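To make the nodes-and-edges model concrete, here is a toy sketch in Python (the names and links are invented for illustration): a network stored as an adjacency dictionary, with a breadth-first search that counts the link-steps between two nodes.

```python
from collections import deque

# A toy network: nodes are labels, edges are the connections between them.
# An adjacency dict like this is the basic structure behind most network analysis.
network = {
    "Alice": {"Bob", "Carol"},
    "Bob": {"Alice", "Dave"},
    "Carol": {"Alice", "Dave"},
    "Dave": {"Bob", "Carol", "Eve"},
    "Eve": {"Dave"},
}

def degrees_of_separation(net, start, goal):
    """Breadth-first search: the number of link-steps between two nodes."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        node, dist = frontier.popleft()
        if node == goal:
            return dist
        for neighbour in net[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, dist + 1))
    return None  # the two nodes are not connected

print(degrees_of_separation(network, "Alice", "Eve"))  # 3
```

Everything else in network analysis (hubs, clusters, path lengths) is built on this simple representation of distinct elements and the links between them.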

In 2005, the US National Academy of Sciences was commissioned by the US Army to prepare a general report on the status of Network Science and its possible application to future war-fighting and security preparedness: the promise was that if the approach looked valuable, the Army would put money into getting universities to study the field. The NAS report is publicly available and is worth a read. It groups the fields of application broadly into three: (a) geophysical and biological networks (e.g. river systems, food webs); (b) engineered networks (roads, electricity grid, the Internet); and (c) social networks and institutions.

I’ve prepared a one-page summary, ‘Network Science: some instances of networks and fields of complex dynamic interaction’, which also lists some further study resources, five books and an online movie. (Contact NetIKX if you want to see this). In that I also note: ‘We cannot consider the various types of network… to be independent of each other. Amazon relies on people ordering via the Internet, which relies on a telecomms network, and electronic financial transaction processing, all of which relies on the provision of electricity; their transport and delivery of goods relies on logistics services, therefore roads, marine cargo networks, ports, etc.’

The NetIKX seminar fell neatly into two halves. The first speaker, Professor Yasmin Merali of Hull University Business School, offered us a high-level theoretical view and the applications she laid emphasis on were those critical to business success and adaptation, and cybersecurity. Drew Mackie then provided a tighter focus on how social network research and ‘mapping’ can help to mobilise local community resources for social welfare provision.

Drew’s contribution was in some measure a reprise of the seminar he gave with David Wilcox in July 2016. Another NetIKX seminar which examined the related topics of graph databases and linked data graphs is that given by Dion Lindsay and Dave Clarke in January 2018.

Yasmin Merali noted that five years ago there wasn’t much talk about systems, but now it is commonplace for problems to be identified as ‘systemic’. Yet, ironically, Systems Thinking used to be very hot in the 1990s, later displaced by a fascination with computing technologies. Now once again we realise that we live in a very complex and increasingly unpredictable world of interactions at many levels; where the macro level has properties and behaviours that emerge from what happens at the micro level, without being consciously planned for or even anticipated. We need new analytical frameworks.

Our world is a Complex Adaptive System (CAS). It’s complex because of its many interconnected components, which influence and constrain and feed back upon each other. It is not deterministic like a machine, but more like a biological or ecological system. Complex Adaptive Systems are both stable (persistent) and malleable, with an ability to transform themselves in response to environmental pressures and stimuli – that is the ‘adaptive’ bit.

We have become highly attuned to the idea of networks through exposure to social media; the ideas of ‘gatekeepers’, popularity and influence in such a network are quite easy to understand. But this is selling short the potential of network analysis.

In successful, resilient systems, you will find a lot of diversity: many kinds of entity exist and interact within them. The links between entities in such systems are equally diverse. Links may persist, but they are not there for ever, nor is their nature static. This means the network can be ‘re-wired’, which makes adaptation easier.

Amazing non-linear effects can emerge from network organisation, and you can exploit this in two ways. If adverse phenomena are encountered, the network can implement a corrective feedback response very quickly (for example, to isolate part of the network, which is the correct public health response in the case of an epidemic). Or, if that reaction isn’t going to have the desired effect, we can try to re-wire the network, dampening some feedback loops, reinforcing others, and thus strengthening those ‘constellations’ of links which can best rise to the situation.

Information flows in the network. Yasmin offered us an analogy: the road network, and, distinct from it, the traffic running across that network. People writing about the power of social media have been concentrating on the network structure (the nodes and the links), but not so much on the factors which enable or inhibit different kinds of dynamic within that structure.

Networks can enable efficient utilisation of distributed resources. We can also see networks as the locus where options are generated. Each change in a network brings about new conditions. But the generative capacity does come at a cost: you must allow sufficient diversity. Even if there are elements which don’t seem useful right now, there is a value in having redundant components: that’s how you get resilience.

You might extend network thinking outwards, beyond networking within one organisation, towards a number of organisations co-operating or competing with each other. Some of your potential partners can do better in the current system and with their resources than you; in another set of circumstances, it might be you who can do better. If we can co-operate, each tackling the risks we are best able to cope with, we can spread the overall risk and increase the capability pool.

Yasmin referred to the idea of ‘Six Degrees of Separation’ – that through intermediate connections, each of us is just six link-steps away from anybody else. The idea was important in the development of social network theory, but it turns out to have severe limitations, because where links are very tenuous, the degree of access or influence they imply can be illusory. That’s why simplistic social network graphs can be deceptive.

In a regular ‘small worlds’ network, everyone is connected to the same number of people in some organised way, and even one extra random link shortens the path length. It’s possible to ‘re-wire’ a network to get more of these small-world effects, with the benefit of making very quick transitions possible.
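That claim is easy to verify with a small sketch (invented for illustration, not from the talk): build a regular ring where each node connects to its two neighbours, measure the average path length between all pairs, then add a single shortcut across the ring and measure again.

```python
from collections import deque
from itertools import combinations

def ring(n):
    """A regular network: each node linked to its two immediate neighbours."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def path_length(net, a, b):
    """Breadth-first search for the shortest path between two nodes."""
    frontier, seen = deque([(a, 0)]), {a}
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nb in net[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))

def average_path(net):
    pairs = list(combinations(net, 2))
    return sum(path_length(net, a, b) for a, b in pairs) / len(pairs)

net = ring(12)
before = average_path(net)
net[0].add(6); net[6].add(0)   # one extra link straight across the ring
after = average_path(net)
print(before > after)  # True: a single shortcut lowers the average path length
```

Just one ‘re-wired’ link shortens paths for every pair whose route can cut across the ring, which is the small-world effect in miniature.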

But there is another kind of network, similar in structure to the Internet and most of the biological systems we might consider – and that’s what we can call the ‘scale-free’ network. In this case, there is no cut-off limit to how large, or how well-connected a node can be.

Networks are also ‘lumpy’ – in large networks, there are very large hubs, but also adjacent less-prominent hubs, which in an Internet scenario are less likely to be attacked or degraded. This gives some hope that the system as a whole is less likely to be brought to its knees by a random attack; but a well-targeted attack against the larger hubs can indeed inflict a great deal of damage. This is something that concerns security-minded designers of networks for business. It is strategically imperative to have good intelligence about what is going on in a networked system – what are the entities, which of them are connected, and what is the nature of those connections and the information flows between them.
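The contrast between random and targeted damage can also be sketched in a few lines (again a toy network, invented for illustration): removing a peripheral node barely matters, while removing the largest hub fragments the system.

```python
from collections import deque

# A 'lumpy' toy network: one large hub (H) with a smaller adjacent hub (h).
net = {
    "H": {"a", "b", "c", "h"},
    "h": {"H", "d", "e"},
    "a": {"H"}, "b": {"H"}, "c": {"H"},
    "d": {"h"}, "e": {"h"},
}

def component_size(net, start):
    """How many nodes remain reachable from `start`."""
    seen, frontier = {start}, deque([start])
    while frontier:
        for nb in net[frontier.popleft()]:
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return len(seen)

def remove(net, node):
    """Return a copy of the network with one node (and its links) deleted."""
    return {n: {nb for nb in nbs if nb != node}
            for n, nbs in net.items() if n != node}

print(component_size(remove(net, "a"), "H"))  # random leaf gone: 6 nodes still connected
print(component_size(remove(net, "H"), "h"))  # big hub gone: only 3 remain together
```

A random ‘attack’ most likely hits one of the many minor nodes; a well-targeted one against the big hub strands most of the network, which is exactly why hub intelligence matters to security-minded designers.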

It’s important to distinguish between resilience and robustness. Resilience often comes from having network resources in place which may be redundant, may appear to be superfluous or of marginal value, but they provide a broader option space and a better ability to adapt to changing circumstance.

Looking more specifically at social networks, Yasmin referred to the ‘birds of a feather flock together’ principle, where people are clustered and linked based on similar values, aspirations, interests, ways of thinking etc. Networks like this are often efficient and fast to react, and much networking in business operates along those lines. However, within such a network, you are unlikely to encounter new, possibly valuable alternative knowledge and ways of thinking.

Heterogeneous linkages may propagate along weaker ties, but they are valuable for expanding the knowledge pool. Expanded linkages may operate along the ‘six degrees’ principle, and through intermediate friends-of-friends, who serve both as transmitters and as filters. And yet a trend has been observed for social network engines (such as Facebook) to create a superdominance of ‘birds of a feather’ types of linkages, leading to confirmation bias and even polarisation.

In traditional ‘embodied’ social networks, people bonded and transacted with others whom they knew in relatively persistent ways, and could assess through an extended series of interactions in a broadly understandable context. In the modern cybersocial network, this is more difficult to re-create, because interactions occur through ‘shallow’ forms such as text and image – information is the main currency – and often between people who do not really know each other.

Another problem is the increased speed of information transfer, and decreased threshold of time for critical thought. Decent journalism has been one of the casualties. Yes, ‘citizen journalism’ via tweet or online video post can provide useful information – such informants can often go where the traditional correspondent could not – but verification becomes problematic, as does getting the broader picture, when competition between news channels to be first with the breaking story ‘trumps’ accuracy and broader context.

If we think of cybersocial networks as information networks, carrying information and meaning, things become interesting. Complexity comes not just from the arrangement of links and nodes, but also from the multiple versions of information, and whether a ‘message’ means the same to each person who receives it: there may be multiple frameworks of representation and understanding standing between you and the origin of the information.

This has ethical implications. Some people say that the Internet has pushed us into a new space. Yasmin argues that many of the issues are those we had before, only now more intensely. If we think about the ‘gig economy’, where labour value is extracted but workers have scant rights – or if we think about the ownership of data and the rights to use it, or surveillance culture – these issues have always been around. True, those problems are now being magnified, but maybe that cloud has a silver lining in forcing legislators to start thinking about how to control matters. Or is it the case that the new technologies of interaction have embedded themselves at such a fundamental level that we cannot shift them?

What worries Yasmin more are issues around Big Data. As we store increasingly large, increasingly granular data about people from sources such as fitbits, GPS trackers, Internet-of-Things devices, online searches… we may have more data, but are we better informed? Connectivity is said to be communication, but do we understand what is being said? The complexity of the data brings new challenges for ethics – often, you don’t know where it comes from, what was the quality of the instrumentation, and how to interpret the data sets.

And then there is artificial intelligence. The early dream was that AI would augment human capability, not displace it. In practice, it looks as if AI applications do have the potential to obliterate human agency. Historically, our frameworks for how to be in the world, how to understand it, were derived from our physical and social environment. Because our direct access to the physical world and the raw data derived from it is compromised, replaced by other people’s representation of other people’s possible worlds, we need to figure out whose ‘news’ we can trust.

When we act in response to the aggregated views of others, and messages filtered through the media, we can end up reinforcing those messages. Yasmin gave as an example rumours of the imminent collapse of a bank, causing a ‘bank run’ which actually does cause the bank’s collapse (in the UK, an example was the September 2007 run on Northern Rock). She also recounted examples of the American broadcast media’s spin on world events, such as the beginning of the war in Iraq, and 9/11. People chose to tune in to those media outlets whose view of the world they preferred. (‘Oh honey, why do you watch those channels? It’s so much nicer on Fox News.’)

There is so much data available out there, that a media channel can easily find provable facts and package them together to support its own interpretation of the world. This process of ‘cementation’ of the silos makes dialogue between opposed camps increasingly difficult – a discontinuity of contemporaneous worlds. This raises questions about the way our contextual filtering is evolving in the era of the cybersocial. And if we lose our ‘contextual compass’, interpreting the world becomes more problematic.

In Artificial Intelligence, there are embedded rules. How does this affect human agency in making judgements? One may try to inject some serendipity into the process – but serendipity, said Yasmin, is not that serendipitous.

Yasmin left us with some questions. Who controls the network, and who controls the message? Should we be sitting back, or are there ethical considerations that mean we should be actively worrying about these things and doing what we can? What is it ethical not to have known, when things go wrong?


Drew Mackie prepares network maps for organisations; most of the examples he would give are in the London area. He declared he would not be talking about network theory, although much is implied, and underlies what he would address.

Mostly, Drew and his associates work with community groups. What they seek to ‘map’ are locally available resources, which may themselves be community groups, or agencies. In this context, one way to find out ‘where stuff is’ is to consult some kind of catalogue, such as those which local authorities prepare. And a location map will show you where stuff is. But when it comes to a network map, what we try to find out and depict is who collaborates with whom, across a whole range of agencies, community groups, and key individuals.

When an organisation commissions a network map from Drew, they generally have a clear idea of what they want to do with it. They may want to know patterns of collaboration, what assets are shared, who the key influencers are, and it’s because they want to use that information to influence policy, or to form projects or programmes in that area.

Drew explained that the kinds of network map he would be talking about are more than just visual representations that can be analysed according to various metrics. They are also a kind of database: they hold huge amounts of data in the nodes and connections, about how people collaborate, what assets they hold, etc. So really, what we create is a combination of a database and a network map, and as he would demonstrate, software can help us maintain both aspects.

If you want to build such a network map, it is essential to appoint a Map Manager to control it, update it, and also promote it. Unless you generate and maintain that awareness, in six months the map will be dead: people won’t understand it, or why it was created.

Residents in the area may be the beneficiaries, but we don’t expect them to interact with the map to any great extent. The main users will be one step up. To collect the information that goes into building the map, and to encourage people to support the project, you need people who act as community builders; Drew and his colleagues put quite a lot of effort in training such people.

To do this, they use two pieces of online software: sumApp, and Kumu. SumApp is the data collection program, into which you feed data from various sources, and it automatically builds you a network map through the agency of Kumu, the network visualisation and analytics tool. Data can be exported from either of these.

When people contribute their data to such a system, what they see online is the sumApp front end; they contribute data, then they get to see the generated network map. No-one has to do any drawing. SumApp can be left open as a permanent portal to the network map, so people can keep updating their data; and that’s important, because otherwise keeping a network map up to date is a nightmare (and probably won’t happen, if it’s left to an individual to do).

The information entered can be tagged with a date, and this allows a form of visualisation that shows how the network changes over time.

Drew then showed us how sumApp works, first demonstrating the management ‘dashboard’ through which we can monitor who are the participants, the number of emails sent, connections made and received, etc. So that we can experience that ourselves should we wish, Drew said he would see about inviting everyone present to join the demonstration map.

Data is gathered in through a survey form, which can be customised to the project’s purpose. To gather information about a participant’s connections, sumApp presents an array of ‘cards’, which you can scroll through or search, to identify those with whom you have a connection; and if you make a selection, a pop-up box enquires how frequently you interact with that person – in general, that correlates well with how closely you collaborate – and you can add a little story about why you connect. Generally that is in words, but sound and video clips can also be added.

Having got ‘data input’ out of the way, Drew showed us how the map can be explored. You can see a complete list of all the members of the map. If you were to view the whole map and all its connections, you would see an undecipherable mess; but by selecting a node member and choosing a command, you can for example fade back all but the immediate (first-degree) connections of one node (he chose our member Steve Dale as an example). Or, you could filter to see only those with a particular interest, or other attribute in common.

Drew also demonstrated that you can ask to see who else is connected to one person or institution via a second degree of connection – for example, those people connected to Steve via Conrad. This is a useful tool for organisations which are seeking to understand the whole mesh of organisations and other contacts round about them. Those who are keenest in using this are not policy people or managers, but people with one foot in the community, and the other foot in a management role. People such as children’s centre managers, or youth team leaders – people delivering a service locally, but who want to understand the broader ecology…
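Conceptually, the filtering Drew demonstrated is simple set arithmetic over the connection data. Here is a sketch (the names and links are invented, not taken from the real map): first-degree connections are a node’s direct contacts; second-degree connections are contacts-of-contacts, minus the people you already know.

```python
# Hypothetical connection data of the kind a tool like Kumu holds.
connections = {
    "Steve": {"Conrad", "Drew"},
    "Conrad": {"Steve", "Helen"},
    "Drew": {"Steve", "David"},
    "Helen": {"Conrad"},
    "David": {"Drew"},
}

def first_degree(net, person):
    """A person's direct (first-degree) connections."""
    return net[person]

def second_degree(net, person):
    """Contacts of contacts, excluding the person and their direct links."""
    direct = net[person]
    indirect = {c for contact in direct for c in net[contact]}
    return indirect - direct - {person}

print(sorted(first_degree(connections, "Steve")))   # Steve's direct contacts
print(sorted(second_degree(connections, "Steve")))  # reachable via an intermediary
```

Tools like Kumu add the visual layer (fading back everything outside the selection), but this is the underlying query.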

Kumu is easy to use, and Drew and colleagues have held training sessions for people about the broad principles, only for those people to go home and, that night, draw their own Kumu map in a couple of hours – not untypically including about 80 different organisations.

Drew also demonstrated a network map created for the Centre for Ageing Better (CFAB). With the help of Ipsos MORI, they had produced six ‘personas’ which could represent different kinds of older people. One purpose of that project was to see how support services might be better co-ordinated to help people as they get older. Because Drew also talked through this in the July 2016 NetIKX meeting, I shall not cover it again here.

Drew also showed an example created in Graph Commons. This network visualisation software has a nice feature that lets you get a rapid overview of a map in terms of its clusters, highlighting the person or organisation who is most central within that cluster, aggregating clusters for viewing purposes into a single higher-level node, and letting you explore the links between the clusters. The developers of sumApp are planning a forthcoming feature that will let sumApp work with Graph Commons as an alternative graph engine to Kumu.

In closing, Drew suggested that as a table-group exercise we should discuss ideas for how these insights, techniques and tools might be useful in our own work situations; note these on a sheet of flip-chart paper; and then we could later compare the outputs across tables.

Conrad Taylor

Blog for the September 2018 Seminar: Ontology is cool!

Our first speaker, Helen Lippell, is a freelance taxonomist and an organiser of the annual Taxonomy Boot Camp in London.  She also works with organisations on constructing thesauri, ontologies and linked data repositories.  As far as she is concerned, the point of ontology construction is to model the world to help meet business objectives, and that’s the practical angle from which she approached the topic.  Taxonomies and ontologies are strongly related.  Taxonomies are concerned with the relationships between the terms used in a domain; ontologies focus more on describing the things within the domain and the relationships between them.  Neither is inherently better: you choose what is appropriate for your business need.  An ontology offers greater capabilities and a gateway to machine reasoning, but if you don’t need those, the extra effort will not be worth it.  A taxonomy can provide the controlled vocabularies which help with navigation and search.

Using fascinating examples, Helen listed a number of business scenarios in which ontologies can be helpful: information retrieval, classification, tagging and data manipulation.  She is doing a lot of work currently on an ontology that will help in content aggregation and filtering, automating a lot of processes that are currently manual.

Implementing an ontology project is not trivial.  It starts with a process of thoroughly understanding and modelling everything connected to the particular domain in which the project and business operate.  Information professionals are well suited to link between the people with technical skills and others who know the business better and can advocate for the end-users of these systems.

Finally, Helen discussed the software, both free and commercial, that can facilitate this work.  Her talk was followed by an exercise where we produced our own model, with plenty of help and advice from the speakers. We looked at problems in London that we could help solve, such as guiding visitors to London or a five-year ecology plan.  It was fun, although we were not quite up to achieving a high-quality product ready to change the world!

In the second part of the meeting, we heard from Silver Oliver, an information architect.  Again, there was a short talk and then a practical exercise.  We learnt that Domain Modelling is fundamental to compiling successful taxonomies, controlled vocabularies and classification schemes, as well as formal ontologies.  When you set out to model a domain, it is beneficial to engage as many voices and perspectives as possible.  It is helpful to do this before you start exploring tools and implementations, so that you don’t exclude people from being able to participate with their different views and perspectives.  The exercise that followed looked at creating a website focusing on food and recipes, which was a pleasant topic to work on in our small groups.

The seminar finished with a set of recommendations:

  • Don’t dive into software: start with whiteboards.
  • Don’t do data modelling alone in a corner. Domain modelling is all about understanding the domain, through conversation and building a shared language.
  • Be wary of getting inspiration from other models you believe to be similar. Start with conversations instead – though stealing ideas can be useful!
  • Rather than ‘working closed’ and revealing your results at the end – keep the processes open and show people what you are doing.
  • An evolving ontology of the domain is a good way to capture these discussions and agreements about what things mean.
  • Rather than evolving a humongous monolithic domain model which is hard to get your head around, work with smaller domains with bounded contexts.

That led to a break with refreshments and general conversations based on our experiences during the afternoon.

Extract from a report by Conrad Taylor.

If you want to read the full account of this seminar – follow this link:

Blog for the July 2018 seminar: Machines and Morality: Can AI be Ethical?

In discussions of AI, one issue that is often raised is that of the ‘black box’ problem, where we cannot know how a machine system comes to its decisions and recommendations. That is particularly true of the class of self-training ‘deep machine learning’ systems which have been making the headlines in recent medical research.

Dr Tamara Ansons has a background in Cognitive Psychology and works for Ipsos MORI, applying academic research, principally from psychology, to various client-serving projects. In her PhD work, she looked at memory and how it influences decision-making; in the course of that, she investigated neural networks, as a form of representation for how memory stores and uses information.

At our NetIKX seminar for July 2018, she observed that ‘Artificial Intelligence’ is being used across a range of purposes that affect our lives, from mundane to highly significant. Recently, she thinks, the technology has been developing so fast that we have not been stepping back enough to think about the implications properly.

Tamara displayed an amusing image, an array of small photos of round light-brown objects, each one marked with three dark patches. Some were photos of chihuahua puppies, and the others were muffins with three raisins on top! People can easily distinguish between a dog and a muffin, a raisin and an eye or doggy nose. But for a computing system, such tasks are fairly difficult. Given the discrepancy in capability, how confident should we feel about handing over decisions with moral consequences to these machines?

Tamara stated that the ideas behind neural networks have emerged from cognitive psychology, from a belief that how we learn and understand information is through a network of interconnected concepts. She illustrated this with diagrams in which one concept, ‘dog’, was connected to others such as ‘tail’, ‘has fur’, ‘barks’ [but note, there are dogs without fur and dogs that don’t bark]. From a ‘connectionist’ view, our understanding of what a dog is, is based around these features of identity, and how they are represented in our cognitive system. In cognitive psychology, there is a debate between this view and a ‘symbolist’ interpretation, which says that we don’t necessarily abstract from finer feature details, but process information more as a whole.

This connectionist model of mental activity, said Tamara, can be useful in approaching some specialist tasks. Suppose you are developing skill at a task that presents itself to you frequently – putting a tyre on a wheel, gutting fish, sewing a hem, planing wood. We can think of the cognitive system as having component elements that, with practice and through reinforcement, become more strongly associated with each other, such that one becomes better at doing that task.

Humans tend to have a fairly good task-specific ability. We learn new tasks well, and our performance improves with practice. But does this encapsulate what it means to be intelligent? Human intelligence is not just characterised by the ability to do certain tasks well. Tamara argued that what makes humans unique is our adaptability, the ability to take learnings from one context and apply them imaginatively to another. And humans don’t have to learn something over many, many trials. We can learn from a single significant event.

An algorithm is a set of rules which specify how certain bits of information are combined in a stepwise process. As an example, Tamara suggested a recipe for baking a cake.

Many algorithms can be represented with a kind of node-link diagram that on one side specifies the inputs, and on the other side the outputs, with intermediate steps between to move from input to output. The output is a weighted aggregate of the information that went into the algorithm.

When we talk about ‘learning’ in the context of such a system – ‘machine learning’ is a common phrase – a feedback or evaluation loop assesses how successful the algorithms are at matching input to acceptable decision; and the system must be able to modify its algorithms to achieve better matches.
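The ‘weighted aggregate plus feedback loop’ idea Tamara described can be shown in miniature. This is a hedged sketch, not anything presented at the seminar: a single artificial neuron learning the logical OR of two inputs by repeatedly comparing its output with the right answer and nudging its weights.

```python
# Training examples: pairs of inputs and the desired output (logical OR).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1  # how strongly each error adjusts the weights

def predict(x):
    # The output is a weighted aggregate of the inputs, passed through a threshold.
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

for _ in range(20):                      # repeated trials ('training epochs')
    for x, target in examples:
        error = target - predict(x)      # the evaluation / feedback loop
        for i in range(2):               # modify the rule to reduce the error
            weights[i] += rate * error * x[i]
        bias += rate * error

print([predict(x) for x, _ in examples])  # [0, 1, 1, 1]
```

The point of the sketch is the loop, not the arithmetic: nothing tells the system the rule for OR; it adjusts its weights until its outputs match the training data – which is also why the quality of that training data matters so much, as the next paragraph argues.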

Tamara suggests that at a basic level, we must recognise that humans are the ones feeding training data to the neural network system – texts, images, audio etc. The implication is that the accuracy of machine learning is only as good as the data you give it. If all the ‘dog’ pictures we give it are of Jack Russell terriers, it’s going to struggle at identifying a Labrador as a dog. We should also think about the people who develop these systems – they are hardly a model of diversity, and women and ethnic minorities are under-represented. The cognitive biases of the developer community can influence how machine learning systems are trained, what classifications they are asked to apply, and therefore how they work.

If the system is doing something fairly trivial, such as guessing what word you meant to type when you make a keyboarding mistake, there isn’t much to worry about. But what if the system is deciding whether and on what terms to give us insurance, or a bank loan or mortgage? It is critically important that we know how these systems have been developed, and by whom, to ensure that there are no unfair biases at work.

Tamara said that an ‘AI’ system develops its understanding of the world from the explicit input with which it is fed. She suggested that in contrast, humans make decisions, and act, on the basis of myriad influences of which we are not always aware, and often can’t formulate or quantify. Therefore it is unrealistic, she suggests, to expect an AI to achieve a human subtlety and balance in its decision-making.

However, there have been some very promising results using AI in certain decision-making contexts, for example, in detecting certain kinds of disease. In some of these applications, it can be argued that the AI system can sidestep the biases, especially the attentional biases, of humans. But there are also cases where companies have allowed algorithms to act in highly inappropriate and insensitive ways towards individuals.

But perhaps the really big issue is that we don’t truly understand what is happening inside these networks – certainly in the ‘deep learning’ networks, where the hidden inner layers develop a degree of complexity that is beyond our powers to comprehend. This is an aspect which Stephanie would address.
Stephanie Mathieson is the policy manager at ‘Sense About Science’, a small independent campaigning charity based in London. SAS was set up in 2002, at a time when the media was struggling to cope with science-based topics such as genetic modification in farming and the alleged link between the MMR vaccine and autism.

SAS works with researchers to help them to communicate better with the public, and has published a number of accessible topic guides, such as ‘Making Sense of Nuclear’, ‘Making Sense of Allergies’ and other titles on forensic genetics, chemical stories in the press, radiation, drug safety etc. They also run a campaign called ‘Ask For Evidence’, equipping people to ask questions about ‘scientific’ claims, perhaps by a politician asking for your vote, or a company for your custom.

But Stephanie’s main focus is around their Evidence In Policy work, examining the role of scientific evidence in government policy formation. A recent SAS report surveyed how transparent twelve government departments are about their use of evidence. The focus is not about the quality of evidence, nor the appropriateness of policies, just on being clear what evidence was taken into account in making those decisions, and how. In talking about the use of Artificial Intelligence in decision support, ‘meaningful transparency’ would be the main concern she would raise.

Sense About Science’s work on algorithms started a couple of years ago, following a lecture by Cory Doctorow, the author of the blog Boing Boing, which raised the question of ‘black box’ decision making in people’s lives. Around the same time, similar concerns were being raised by the independent investigative newsroom ‘ProPublica’, and by Cathy O’Neil’s book ‘Weapons of Math Destruction’. The director of Sense About Science urged Stephanie to read that book, and she heartily recommends it.

There are many parliamentary committees which scrutinise the work of government. The House of Commons Science and Technology Committee has an unusually broad remit. They put out an open call to the public, asking for suggestions for enquiry topics, and Stephanie wrote to suggest the role of algorithms in decision-making. Together with seven or eight others, Stephanie was invited to come and give a presentation, and she persuaded the Committee to launch an enquiry on the issue.

The SciTech Committee’s work was disrupted by the 2017 snap general election, but they pursued the topic, and reported in May 2018 (‘Algorithms in decision-making’).

Stephanie then treated us to a version of the ‘pitch’ which she gave to the Committee.

An algorithm is really no more than a set of steps carried out sequentially to give a desired outcome. A cooking recipe, or directions for how to get to a place, are everyday examples. Algorithms are everywhere, many implemented by machines, whether controlling the operation of a cash machine or placing your phone call. Algorithms are also behind the analysis of huge amounts of data, carrying out tasks that would be beyond the capacity of humans, efficiently and cheaply, and bringing a great deal of benefit to us. They are generally considered to be objective and impartial.
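In that spirit, the steps a cash machine might follow to pay out a sum of money can be written down as an algorithm – a fixed sequence of steps from input to outcome. (A simplified sketch, not any real machine’s logic.)

```python
# An everyday 'algorithm': pay out an amount using the fewest notes,
# largest denominations first. (Simplified illustration only.)

def dispense(amount, denominations=(50, 20, 10, 5)):
    """Return the list of notes to pay out for the given amount."""
    notes = []
    for note in denominations:      # step through denominations in order
        while amount >= note:       # repeat while a note of this size fits
            notes.append(note)
            amount -= note
    if amount != 0:
        raise ValueError("amount cannot be paid with these notes")
    return notes

print(dispense(85))  # [50, 20, 10, 5]
```

Each step is mechanical and unambiguous – which is precisely why machines can carry algorithms out so efficiently, and at such scale.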

But in reality, there are troubling issues with algorithms. Quite rapidly, and without debate, they have been engaged to make important decisions about our lives. Such a decision would in the past have been made by a human, and though that person might be following a formulaic procedure, at least you can ask a person to explain what they are doing. What is different about computer algorithms is their potential complexity and ability to be applied at scale; which means, if there are biases ingrained in the algorithm, or in the data selected for them to process, those shortcomings will also be applied at scale, blindly, and inscrutably.

  • In education, algorithms have been used to rank teachers, and in some cases, to summarily sack the ‘lower-performing’ ones.
  • Algorithms generate sentencing guidelines in the criminal justice system, where analysis has found that they are stacked against black people.
  • Algorithms are used to determine credit scores, which in turn determine whether you get a loan, a mortgage, a credit card, even a job.
  • There are companies offering to create a credit score for people who don’t have a credit history, by using ‘proxy data’. They do deep data mining, investigating how people use social media, how they buy things online, and other evidence.
  • The adverts you get to see on Google and Facebook are determined through a huge algorithmic trading market.
  • For people working for Uber or Deliveroo, their bosses essentially are algorithms.
  • Algorithms help the Government Digital Service to decide what pages to display on the Web site. The significance is, that site is the government’s interface with the public, especially now that individual departments have lost their own Web sites.
  • A recent Government Office for Science report suggests that government is very keen to increase its use of algorithms and Big Data – it calls them ‘data science techniques’ – in deploying resources for health, social care and the emergency services. Algorithms are being used in the fire service to determine which fire stations might be closed.

In China, the government is developing a comprehensive ‘social credit’ system – in truth, a kind of state-run reputation ranking system – where citizens will get merits or demerits for various behaviours. Living in a modestly-sized apartment might add points to your score; paying bills late or posting negative comments online would be penalised. Your score would then determine what resources you will have access to. For example, anyone defaulting on a court-ordered fine will not be allowed to buy first-class rail tickets, or to travel by air, or take a package holiday. That scheme is already in pilots now, and is supposed to be fully rolled out as early as 2020.

(See the Wikipedia article on China’s Social Credit System, and Wired’s coverage of the scheme.)

Stephanie suggested a closer look at the use of algorithms to rank teacher performance. Surely it is better to do so using an unbiased algorithm? This is what happened in the Washington, D.C. school district in the USA – an example described in some depth in Cathy O’Neil’s book. At the end of the 2009–2010 school year, all teachers were ranked, largely on the basis of a comparison of their pupils’ test scores between one year and the next. On the basis of this assessment, 2% of teachers were summarily dismissed, and a further 5% lost their jobs the following year. But what if the algorithms were misconceived, and the teachers thus victimised were not bad teachers?

In this particular case, one of the fired teachers was rated very highly by her pupils and their parents. There was no way she could work out the basis of the decision; it later emerged that it turned on the consecutive-year test-score proxy, which had not taken into account the baseline performance with which those pupils came into her class.
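A toy calculation (with entirely invented numbers and a made-up scoring formula) shows how such a proxy can misfire when the baseline is ignored:

```python
# Sketch of the consecutive-year test-score proxy. The score compares each
# pupil's result with the previous year's, with no account taken of how the
# previous year's baseline was arrived at. (Numbers invented.)

def naive_value_added(scores_last_year, scores_this_year):
    """Average year-on-year change in pupils' test scores."""
    changes = [now - before
               for before, now in zip(scores_last_year, scores_this_year)]
    return sum(changes) / len(changes)

# A teacher inherits pupils whose previous scores were inflated
# (say, by irregularities in the feeder class):
inflated_baseline = [92, 95, 90, 94]
honest_results    = [78, 80, 75, 79]

print(naive_value_added(inflated_baseline, honest_results))
# A strongly negative 'value added' - the teacher looks bad, although
# the pupils' true starting level was never measured.
```

The arithmetic is impeccable; it is the choice of proxy, decided before any data was seen, that does the damage.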

It cannot be a good thing to have such decisions taken by an opaque process not open to scrutiny and criticism. Cathy O’Neil’s examples have been drawn from the USA, but Stephanie is pleased to note that since the Parliamentary Committee started looking at the effects of algorithms, more British examples have been emerging.


Stephanie then set out what is troubling about the use of algorithms in decision-making:
  • They are often totally opaque, which makes them unchallengeable. If we don’t know how they are made, how do we know if they are weighted correctly? How do we know if they are fair?
  • Frequently, the decisions turned out by algorithms are not understood by the people who deliver that decision. This may be because a ‘machine learning’ system was involved, such that the intermediate steps between input and output are undiscoverable. Or it may be that the service was bought from a third party. This is what banks do with credit scores – they can tell you Yes or No, they can tell you what your credit score is, but they can’t explain how it was arrived at, and whether the data input was correct.
  • There are things that just can’t be measured with numbers. Consider again that example of teacher rankings; the algorithm just can’t process issues such as how a teacher deals with the difficult issues that pupils bring from their home life, not just the test results.
  • Systems sometimes cannot learn when they are wrong, if there is no mechanism for feedback and course correction.
  • Blind faith in technology can lead to the humans who implement those algorithmically-made decisions failing to take responsibility.
  • The perception that algorithms are unbiased can be unfounded – as Tamara had already explained. When it comes to ‘training’ the system, which data do you include, which do you exclude, and is the data set appropriate? If it was originally collected for another purpose, it may not fit the current one.
  • ‘Success’ can be claimed even when people are being harmed. In the public sector, managers may have a sense of problems being ‘fixed’ when teachers are fired. If the objective is to make or save money, and teachers are being fired and resources saved to be redeployed elsewhere, or profits are being made, it can seem as if the model is working: the objective defined at the start has been met, so the model appears to justify itself. And if we can’t scrutinise or challenge it, agree or disagree, we are stuck in that loop.
  • Bias can exist within the data itself. A good example is university admissions, where historical and outdated social norms which we don’t want to see persist, still lurk there. Using historical admissions data as a training data set can entrench bias.
  • Then there is the principle of ‘fairness’. Algorithms consider a slew of statistics, and come out with a probability that someone might be a risky hire, or a bad borrower, or a bad teacher. But is it fair to treat people on the basis of a probability? We have been pooling risk for decades when it comes to insurance cover – as a society we seem happy with that, though we might get annoyed when the premium is decided because of our age rather than our skill in driving. But when sending people to prison, are we happy to tolerate the same level of uncertainty within data? And is past behaviour really a good predictor of future behaviour? Would we as individuals be happy to be treated on the basis of profiling statistics?
  • Because algorithms are opaque, there is a lot of scope for ‘hokum’. Businesses are employing algorithms; government and its agencies, are buying their services; but if we don’t understand how the decisions are made, there is scope for agencies to be sold these services by snake oil salesmen.
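The point above about historical training data can be made concrete with a toy model (data entirely invented): a ‘model’ that simply learns each group’s historical admission rate will reproduce past discrimination, even though no one programmed it to.

```python
# Sketch of bias entrenchment: learning admission rates per group from
# historical decisions reproduces whatever bias those decisions contained.
# (Groups and figures invented for illustration.)

from collections import defaultdict

def learn_admission_rates(history):
    """history: list of (group, admitted) pairs from past decisions."""
    counts = defaultdict(lambda: [0, 0])     # group -> [admitted, total]
    for group, admitted in history:
        counts[group][0] += int(admitted)
        counts[group][1] += 1
    return {g: adm / tot for g, (adm, tot) in counts.items()}

# Historical decisions reflecting outdated social norms:
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

rates = learn_admission_rates(history)
print(rates)   # {'A': 0.8, 'B': 0.3} - the old bias, now 'learned'
```

Nothing in the code mentions prejudice; the bias arrives entirely through the data chosen for training, which is exactly the problem Tamara and Stephanie both described.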

What next?

In the first place, we need to know where algorithms are being used to support decision-making, so we know how to challenge the decision.

When the SciTech Committee published its report at the end of May, Stephanie was delighted that they took up her suggestion to ask government to publish a list of all public-sector uses of algorithms, current or planned, where they will affect significant decisions. The Committee also wants government to identify a minister to provide government-wide oversight of such algorithms where they are used by the public sector, to co-ordinate departments’ approaches to the development and deployment of algorithms, and to oversee partnerships with the private sector. They also recommended ‘transparency by default’ where algorithms affect the public.

Secondly, we need to ask for the evidence. If we don’t know how these decisions are being made, we don’t know how to challenge them. Whether teacher performance is being ranked, criminals sentenced or services cut, we need to know how those decisions are being made. Organisations should apply standards to their own use of algorithms, and government should be setting the right example. Where decision-support algorithms are used in the public sector, it is vital that people are treated fairly, that someone can be held accountable, that decisions are transparent, and that hidden prejudice is avoided.

The public sector, because it holds significant datasets, actually holds a lot of power that it doesn’t seem to appreciate. In a couple of recent cases, it has given data away without demanding transparency in return. A notorious example was the 2016 deal between the Royal Free Hospital and Google DeepMind, to develop algorithms to predict kidney failure, which led to the inappropriate transfer of sensitive personal data.

In the Budget of November 2017, the government announced a new Centre for Data Ethics and Innovation, but it hasn’t really talked about its remit yet. It is consulting on this until September 2018, so maybe by the end of the year we will know something. The SciTech Committee report had lots of strong recommendations for what its remit should be, including evaluation of accountability tools, and examining biases.

The Royal Statistical Society also has a council on data ethics, and the Nuffield Foundation has set up a new commission, now the Convention on Data Ethics. Stephanie’s concern is that we now have several different bodies paying attention; they should all set out their remits to avoid duplication of work, so that we know whose reports to read and whose recommendations to follow. There needs to be some joined-up thinking, but at present none of them seems to be listening to the others.

Who might create a clear standard framework for data ethics? Chi Onwurah, the Labour Shadow Minister for Business, Energy and Industrial Strategy, recently said that the role of government is not to regulate every detail, but to set out a vision for the type of society we want, and the principles underlying that. She has also said that we need to debate those principles; once they are clarified, it makes it easier (but not necessarily easy) to have discussions about the standards we need, and how to define them and meet them practically.

Stephanie looks forward to seeing the Government’s response to the Science and Technology Committee’s report – a response which is required by law.

A suggested Code of Conduct came out in late 2016, with five principles for algorithms and their use: Responsibility – someone in authority to deal with anything that goes wrong, and in a timely fashion; Explainability – the new GDPR includes a clause giving a right to explanation of decisions made about you by algorithms (although this is now law, much will depend on how it is interpreted in the courts); and the remaining three, Accuracy, Auditability and Fairness.

So basically, we need to ask questions about the protection of people, and there have to be these points of challenge. Organisations need to ensure mechanisms of recourse if anything does go wrong, and they should also consider liability. At a recent speaking engagement on this topic, Stephanie addressed a roomful of lawyers; she told them not to see this as a way to shirk liability, but to think about what will happen.

This conversation is at the moment being driven by the autonomous car industry, who are worried about insurance and insurability. When something goes wrong with an algorithm, whose fault might it be? Is it the person who asked for it to be created, and deployed it? The person who designed it? Might something have gone wrong in the Cloud that day, such that a perfectly good algorithm just didn’t work as it was supposed to? ‘People need to get to grips with these liability issues now, otherwise it will be too late, and some individual or group of individuals will get screwed over,’ said Stephanie, ‘while companies try to say that it wasn’t their fault.’

Regulation might not turn out to be the answer. If you do regulate, what do you regulate? The algorithms themselves, similar to the manner in which medicines are scrutinised by the medicines regulator? Or the use of the algorithms? Or the outcomes? Or something else entirely?

Companies like Google, Facebook, Amazon, Microsoft – have they lost the ability to be able to regulate themselves? How are companies regulating themselves? Should companies regulate themselves? Stephanie doesn’t think we can rely on that. Those are some of the questions she put to the audience.

Tamara took back the baton. She noted that we interact extensively with AI through many aspects of our lives. Many jobs that have been thought of as a human preserve – thinking jobs – may become more automated, handled by a computer or neural network. Jobs as we know them now may not be the jobs of the future. Does that mean unemployment, or just a change in the nature of work? It’s likely that in future we will be working side by side with AI on a regular basis. Already, decisions about bank loans, insurance, parole and employment increasingly rely on AI.

As humans, we are used to interacting with each other. How will we interact with non-humans? Specifically, with AI entities? Tamara referenced the famous ‘ELIZA’ experiment conducted 1964–68 by Joseph Weizenbaum, in which a computer program was written to simulate a practitioner of person-centred psychotherapy, communicating with a user via text dialogue. In response to text typed in by the user, the ELIZA program responded with a question, as if trying sympathetically to elicit further explanation or information from the user. This illustrates how we tend to project human qualities onto these non-human systems. (A wealth of other examples are given in Sherry Turkle’s 1984 book, ‘The Second Self’.)
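The flavour of ELIZA can be conveyed in a few lines of pattern-matching – a much-simplified sketch, not Weizenbaum’s actual script, which used a richer set of ranked patterns and pronoun substitutions:

```python
# A much-simplified ELIZA-style responder: match simple patterns in the
# user's text and reflect them back as a question, giving the impression
# of a sympathetic listener. (Illustrative only.)

import re

RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(user_text):
    for pattern, template in RULES:
        match = pattern.search(user_text)
        if match:
            return template.format(*match.groups())
    return "Please go on."   # default prompt for more information

print(respond("I am worried about the future"))
# Why do you say you are worried about the future?
```

There is no understanding anywhere in the program, yet users of the original famously confided in it – a vivid demonstration of how readily we project human qualities onto such systems.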

However, sometimes machine/human interactions don’t happen so smoothly. Robotics professor Masahiro Mori studied this in the 1970s, examining people’s reactions to robots made to appear human. Many people responded to such robots with greater warmth as they were made to appear more human, but at a certain point along that transition there was an experience of unease and revulsion which he dubbed the ‘Uncanny Valley’. This is the point at which something jarring about the appearance, behaviour or mode of conversation of the artificial human makes you feel uncomfortable and shatters the illusion.

‘Uncanny Valley’ research has continued since Mori’s original work. It has significance for computer-generated on-screen avatars, and for CGI characters in movies. A useful discussion of the phenomenon can be found in the Wikipedia article on the subject.

There is a Virtual Personal Assistant service for iOS devices, called ‘Fin’, which Tamara referenced. Combining an iOS app with a cloud-based computation service, ‘Fin’ avoids some of the risk of the Uncanny Valley by interacting purely through voice commands and on-screen text responses. Is that how people might feel comfortable interacting with an AI? Or would people prefer something that attempts to represent a human presence?

Clare Parry remarked that she had been at an event about care robots, where you don’t get an Uncanny Valley effect because despite a broadly humanoid form, they are obviously robots. Clare also thought that although robots (including autonomous cars) might do bad things, they aren’t going to do the kind of bad things that humans do, and machines do some things better than people do. An autonomous car doesn’t get drunk or suffer from road-rage…

Tamara concluded by observing that our interactions with these systems shape how we behave. This is not a new thing – we have always been shaped by the systems and the tools that we create. The printing press moved us from an oral, social method of sharing stories to a more individual experience, which arguably has made us more individualistic as a society. Perhaps our interactions with AI will shape us similarly, and we should stop and think about the implications for society. Will a partnership with AI bring out the best of our humanity, or make us more machine-like?

Tamara would prefer us not to think of Artificial Intelligence as a reified machine system, but of Intelligence Augmented, shifting the focus of discussion onto how these systems can help us flourish. And who are the people that need that help the most? Can we use these systems to deal with the big problems we face, such as poverty, climate change, disease and others? How can we integrate these computational assistances to help us make the best of what makes us human?

There was so much food for thought in the lectures that everyone was happy to talk together in the final discussion and the chat over refreshments that followed.  We could campaign to say, ‘We’ve got to understand the algorithms, we’ve got to have them documented’, but perhaps there are certain kinds of AI practice (such as those involved in medical diagnosis from imaging input) where it is just not going to be possible.

From a blog by Conrad Taylor, June 2018

Some suggested reading