January 2019 Seminar: Making and sharing knowledge in communities: can wikis and related tools help?

Summary

At this meeting Andy Mabbett, a hugely experienced Wikipedia editor, gave an introduction to the background of Wikipedia and discussed many of the issues that it raises.
Accumulating, organising and sharing knowledge is never easy; this is the problem Knowledge Management sought to address. Today we hope networked electronic platforms can facilitate the process. They are never enough in themselves, because the issues are essentially human, to do with attitudes, social dynamics and work culture — but good tools certainly help.

In past seminars, NetIKX has looked at Microsoft SharePoint, but that is proprietary and commercial, and it doesn’t work for wider communities of practice and interest. In this seminar, we looked at a range of alternatives, some of them free of charge and/or open source, together with the social dynamics that make them succeed or fail.
First we looked at the wiki model. The case study was Wikipedia — famous, but poorly understood. Andy Mabbett presented this. Andy is a hugely experienced Wikipedia editor, who inspires respect and affection around the world for his ability to explain how Wikipedia works and for training novices in contributing content – including as a ‘Wikipedian In Residence’, encouraging scientific and cultural organisations to contribute their knowledge to Wikipedia.

A few stats: Wikipedia, the free online encyclopedia that anyone can in theory edit, has now survived 18 years, existing on donations and volunteering. It has accumulated over 40 million articles in 301 languages, and attracts about 500 million visitors a month. The English edition has nearly 5.8 million articles. There are about 300,000 active contributors, of whom 4,000 make over a hundred edits annually.

Under the wider banner of ‘Wikimedia’, there are sister projects such as Wiktionary, a free dictionary; Wikiversity, which hosts free learning materials; Wikidata, which is developing a large knowledge base; and the Wikimedia Commons, which holds copyright-free photos, audio and other multimedia resources.

And yet, as the Wikipedia article on Wikipedia admits, “Wikipedia has been criticized for exhibiting systemic bias, for presenting a mixture of ‘truths, half truths, and some falsehoods’, and for being subject to manipulation and spin in controversial topics.” This isn’t so surprising, because humans are involved. It’s a community that has had to struggle with issues of authority and quality control, partiality and sundry other pathologies. Andy provided insight into these problems, and explained how the Wikipedia community organises itself to define, defend and implement its values.

No NetIKX seminar would be complete without syndicate sessions, conducted in parallel table groups. For the second half of the afternoon, each group was presented in turn with tales from two further case studies of knowledge sharing using different platforms and operating under different rules. These endeavours might have used email lists, Google Docs, another kind of wiki software, or some other kind of groupware. There were tales of triumph, but of tribulation too.

At the end of the afternoon, a pooling of thoughts helped to identify key factors that may point the way towards building better ways of sharing knowledge.

Speakers

Andy Mabbett has been a Wikipedia editor (as User:Pigsonthewing) since 2003 and involved with Wikidata since its inception in 2012. He has given presentations about Wikimedia projects on five continents, and has a great deal of experience working with organisations that wish to engage with Wikipedia and its sister projects. With a background in programming and managing websites for local government, Andy has been ‘Wikimedian in Residence’ at ORCID; TED; the Royal Society of Chemistry; The Physiological Society; the History of Modern Biomedicine Research Group; and various museums, galleries and archives. He is also the author of three books on the rock band Pink Floyd.

Our case-study witnesses

Sara Culpin is currently Head of Information & Knowledge at CRU International, where she has implemented a successful information and knowledge strategy on a shoestring budget. Since graduating from Loughborough University, she has spent over 25 years in information and knowledge roles at Aon, AT Kearney, PwC, and Deloitte. She is passionate about getting colleagues to share their knowledge across their organisations, while ensuring that their senior managers see the business value. https://www.linkedin.com/in/sara-culpin-2a1b051

Dr Richard Millwood has a background in school maths education, with a history of applying computers to education, and is Director of Core Education UK. As a researcher in the School of Computer Science & Statistics, Trinity College Dublin, he is developing a community of practice for computer science teachers in Ireland and creating workshops for families to develop creative use of computers together. In the 1990s Richard worked with Professor Stephen Heppell to create Ultralab, the learning technology research centre at Anglia Polytechnic University, acting as head 2005–2007. He researched innovation in online higher education in the Institute for Educational Cybernetics at the University of Bolton until 2013, gaining a PhD by Practice in ‘The Design of Learner-centred, Technology-enhanced Education’.

Time and Venue

2pm on 24th January 2019, The British Dental Association, 64 Wimpole Street, London W1G 8YS

Pre Event Information

None

Slides

No slides available for this presentation

Tweets

#netikx96

Blog

See our blog report: Wikipedia & knowledge sharing

Study Suggestions

Andy Mabbett, experienced Wikipedian

Andy Mabbett is an experienced editor of Wikipedia with more than a million edits to his name. Here’s a link to a ‘Wikijabber’ audio interview with Andy by Sebastian Wallroth (Sept 2017)
https://wikijabber.com/wikijabber-0005-with-pigsonthewing/

November 2018 Seminar: The Networkness of Networks

Summary

At this meeting Yasmin Merali, Professor of Systems Thinking and Director of the Centre for Systems Studies at Hull University Business School, and Drew Mackie gave an introduction to network science and demonstrated some practical applications.

Speakers

Yasmin Merali is Professor of Systems Thinking and Director of the Centre for Systems Studies at Hull University Business School. Prior to that she was Co-director of the Doctoral Training Centre for Complexity Science at the University of Warwick and served as Director of Warwick Business School’s Information Systems Research Unit until 2006. Professor Merali is an Expert Evaluator for the EU and was elected to the Executive Committee of the Council of the European Complex Systems Society in 2012 and the Board of the UNESCO Unitwin Complex Systems Digital Campus in 2013. Her research is trans-disciplinary, using complexity theory to address issues of transformation in internet-enabled socio-economic contexts, focusing on network dynamics and the emergence and co-evolution of socio-economic structures. She has extensive consultancy experience in public, private, and third sector organizations, and received a BT Fellowship and an IBM Faculty Award for her work on knowledge management and complexity.
Drew Mackie is a recognised expert in the Kumu online system of network visualisation and is particularly interested in using network methods to evaluate changes in connectivity over the life of projects.
Drew has been active in the Joined Up Digital project for the Centre for Ageing Better, following an exploration into Living Well in the Digital Age with the Age Action Alliance. He has also been involved in social network mapping for the Croydon Best Start programme.

Time and Venue

2pm on 15th November 2018, The British Dental Association, 64 Wimpole Street, London W1G 8YS

Pre Event Information

The internet and advances in information and communications are implicated in the emergence of the network economy and the network society. Greater connectivity and access to increased variety and volume of information enable new and complex forms of organisation. This presents opportunities and threats that are challenging both public and private sector institutions.
This session looks at the quest for more effective ways of dealing with the uncertainties and dynamism of the network economy whilst maximising the opportunities afforded by the Internet and associated technologies. The main speaker will be Professor Yasmin Merali, who will explore how understanding the “networkness” of networks may enable us to understand the emerging context, and to harness network forms of organisation to deliver transformational capacity or stability, as appropriate, in the face of environmental turbulence.
The afternoon will then feature practical discussion, in which those present can share examples from their own experience. This will be facilitated by Drew Mackie, who has a huge range of practical expertise working in this field.
This seminar will be our ‘Community Network’ meeting to which we welcome practitioners from our colleagues in other IKM networks as our guests.

Slides

No slides available

Tweets

#netikx95 There were no tweets from this meeting due to a power cut.

Blog

See our blog report: Networks

Study Suggestions

Have a look at the Centre for Systems Studies at Hull University: https://www.google.com/search?client=firefox-b-d&q=Centre+for+Systems+Studies+%7C+University+of+Hull

September 2018 Seminar: Ontologies and domain modelling: a fun (honest!) and friendly introduction

Summary

At this lively meeting Helen Lippell and Silver Oliver introduced ontologies and explained how they could be used. Michael Smethurst and Anya Somerville ran an interactive practical session.

Speakers

Helen Lippell has run her own consultancy since 2007, working as a specialist in taxonomy, metadata, ontologies and enterprise search. She loves getting stuck into projects and working with clients to figure out how best to use the messy content and data they have. She has supported organisations such as the BBC, gov.uk, Financial Times, Pearson, and Electronic Arts.
Silver Oliver has worked as an Information Architect for many years, previously with the BBC, the British Library and government. For the last 10 years he has worked at Data Language, a small consultancy specialising in semantics. His expertise spans all areas of information architecture, but he focuses primarily on the role of domain modelling in delivering design solutions.
Michael Smethurst has worked as an Information Architect for over ten years. Prior to working for the UK Parliament, he worked at the BBC and BBC R&D on a variety of projects, spanning programmes, iPlayer, news, sport and food. There he brought together practices from the semantic web and the domain-driven design communities. He now works as a data architect for the UK Parliament, using the same methods to understand and document parliamentary processes, workflows and data flows.
Anya Somerville is Head of Indexing and Data Management for the House of Commons Library, where she leads a team of information specialists. The team adds subject indexing, links and other metadata to parliamentary business data. It also manages Parliament’s controlled vocabulary. Anya and her team work closely with Michael and Silver on the domain models for parliamentary business.

A pdf flyer for this meeting can be downloaded from the link Ontologies and domain modelling

Time and Venue

2pm on 20th September 2018, The British Dental Association, 64 Wimpole Street, London W1G 8YS

Pre Event Information

What exactly is an ontology? How can we use them to better understand our information environments? Helen Lippell and Silver Oliver will be explaining all, providing examples from projects they have worked on, and giving you the chance to build your own ontology and domain model. Helen will give an accessible introduction to what ontologies are, how they are being used in a variety of different applications, how they differ from taxonomies, and how you can combine taxonomies and ontologies in models. This introduction assumes no prior knowledge of ontologies or semantic technologies.
Silver will be explaining how ontologies are used in domain modelling, demystifying some of the terminology, and providing case studies to demonstrate ontologies in practice. There will be the chance to get pens and paper out to produce and develop your own ontology and domain model, with additional help from experienced domain modellers Michael and Anya. You will learn the basic ideas around ontologies and domain modelling, see how ontologies can be used to better understand our information environments, and begin to learn how to develop and use ontologies.
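For those who want to experiment before (or after) the session, here is a minimal sketch of the taxonomy/ontology distinction using the Python rdflib library. It is purely illustrative and not drawn from the speakers’ material; all the example names (ex:Dentistry, ex:treats and so on) are invented.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS, SKOS

    EX = Namespace("http://example.org/")
    g = Graph()
    g.bind("ex", EX)
    g.bind("skos", SKOS)

    # A taxonomy arranges concepts in a broader/narrower hierarchy (SKOS).
    g.add((EX.Dentistry, RDF.type, SKOS.Concept))
    g.add((EX.Orthodontics, RDF.type, SKOS.Concept))
    g.add((EX.Orthodontics, SKOS.broader, EX.Dentistry))

    # An ontology goes further: it defines classes of entity and typed
    # relationships between them, so the model can say how things relate,
    # not just which concept sits under which.
    g.add((EX.treats, RDF.type, RDF.Property))
    g.add((EX.treats, RDFS.domain, EX.Dentist))
    g.add((EX.treats, RDFS.range, EX.Patient))
    g.add((EX.AliceSmith, RDF.type, EX.Dentist))
    g.add((EX.BobJones, RDF.type, EX.Patient))
    g.add((EX.AliceSmith, EX.treats, EX.BobJones))

    print(g.serialize(format="turtle"))

Run it with rdflib installed (pip install rdflib): the Turtle output shows the taxonomy triples and the ontology triples living side by side in one graph, which is the kind of combining of taxonomies and ontologies in models that Helen describes.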

Slides

Slides available.

Tweets

#netikx94

Blog

See our blog report: Ontologies and Domain Modelling

Study Suggestions

Take a look at the Simple Knowledge Organization System Namespace Document

https://www.w3.org/2009/08/skos-reference/skos.html

Blog for the July 2018 seminar: Machines and Morality: Can AI be Ethical?

In discussions of AI, one issue that is often raised is that of the ‘black box’ problem, where we cannot know how a machine system comes to its decisions and recommendations. That is particularly true of the class of self-training ‘deep machine learning’ systems which have been making the headlines in recent medical research.

Dr Tamara Ansons has a background in Cognitive Psychology and works for Ipsos MORI, applying academic research, principally from psychology, to various client-serving projects. In her PhD work, she looked at memory and how it influences decision-making; in the course of that, she investigated neural networks, as a form of representation for how memory stores and uses information.

At our NetIKX seminar for July 2018, she observed that ‘Artificial Intelligence’ is being used across a range of purposes that affect our lives, from mundane to highly significant. Recently, she thinks, the technology has been developing so fast that we have not been stepping back enough to think about the implications properly.

Tamara displayed an amusing image, an array of small photos of round light-brown objects, each one marked with three dark patches. Some were photos of chihuahua puppies, and the others were muffins with three raisins on top! People can easily distinguish between a dog and a muffin, a raisin and an eye or doggy nose. But for a computing system, such tasks are fairly difficult. Given the discrepancy in capability, how confident should we feel about handing over decisions with moral consequences to these machines?

Tamara stated that the ideas behind neural networks have emerged from cognitive psychology, from a belief that how we learn and understand information is through a network of interconnected concepts. She illustrated this with diagrams in which one concept, ‘dog’, was connected to others such as ‘tail’, ‘has fur’, ‘barks’ [but note, there are dogs without fur and dogs that don’t bark]. From a ‘connectionist’ view, our understanding of what a dog is, is based around these features of identity, and how they are represented in our cognitive system. In cognitive psychology, there is a debate between this view and a ‘symbolist’ interpretation, which says that we don’t necessarily abstract from finer feature details, but process information more as a whole.

This connectionist model of mental activity, said Tamara, can be useful in approaching some specialist tasks. Suppose you are developing skill at a task that presents itself to you frequently – putting a tyre on a wheel, gutting fish, sewing a hem, planing wood. We can think of the cognitive system as having component elements that, with practice and through reinforcement, become more strongly associated with each other, such that one becomes better at doing that task.

Humans tend to have a fairly good task-specific ability. We learn new tasks well, and our performance improves with practice. But does this encapsulate what it means to be intelligent? Human intelligence is not just characterised by the ability to do certain tasks well. Tamara argued that what makes humans unique is our adaptability, the ability to take learnings from one context and apply them imaginatively to another. And humans don’t have to learn something over many, many trials. We can learn from a single significant event.

An algorithm is a set of rules which specify how certain bits of information are combined in a stepwise process. As an example, Tamara suggested a recipe for baking a cake.

Many algorithms can be represented with a kind of node-link diagram that on one side specifies the inputs, and on the other side the outputs, with intermediate steps between to move from input to output. The output is a weighted aggregate of the information that went into the algorithm.

When we talk about ‘learning’ in the context of such a system – ‘machine learning’ is a common phrase – a feedback or evaluation loop assesses how successful the algorithms are at matching inputs to acceptable decisions, and the system must be able to modify its algorithms to achieve better matches.
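To make that loop concrete, here is a minimal sketch in Python of a single ‘neuron’ of the kind described: the output is a weighted aggregate of the inputs, and a feedback loop nudges the weights whenever the output disagrees with the acceptable decision. This is illustrative only – the toy data is invented, and it is not code from the seminar.

    def predict(weights, inputs):
        # The output is a weighted aggregate of the inputs,
        # thresholded into a yes/no decision.
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total > 0 else 0

    def train(examples, n_features, rate=0.1, epochs=20):
        weights = [0.0] * n_features
        for _ in range(epochs):
            for inputs, target in examples:
                # Feedback loop: compare the decision with the acceptable one...
                error = target - predict(weights, inputs)
                # ...and modify the weights to achieve a better match.
                weights = [w + rate * error * x
                           for w, x in zip(weights, inputs)]
        return weights

    # Invented training data: decide 1 when the second feature dominates.
    examples = [([1.0, 0.0], 0), ([0.0, 1.0], 1),
                ([1.0, 1.0], 1), ([0.2, 0.1], 0)]
    weights = train(examples, n_features=2)
    print(weights, [predict(weights, x) for x, _ in examples])

The crucial point for what follows: the weights are learned entirely from the examples supplied, so the system can only ever be as good as its training data.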

Tamara suggests that at a basic level, we must recognise that humans are the ones feeding training data to the neural network system – texts, images, audio etc. The implication is that the accuracy of machine learning is only as good as the data you give it. If all the ‘dog’ pictures we give it are of Jack Russell terriers, it’s going to struggle at identifying a Labrador as a dog. We should also think about the people who develop these systems – they are hardly a model of diversity, and women and ethnic minorities are under-represented. The cognitive biases of the developer community can influence how machine learning systems are trained, what classifications they are asked to apply, and therefore how they work.

If the system is doing something fairly trivial, such as guessing what word you meant to type when you make a keyboarding mistake, there isn’t much to worry about. But what if the system is deciding whether and on what terms to give us insurance, or a bank loan or mortgage? It is critically important that we know how these systems have been developed, and by whom, to ensure that there are no unfair biases at work.

Tamara said that an ‘AI’ system develops its understanding of the world from the explicit input with which it is fed. She suggested that in contrast, humans make decisions, and act, on the basis of myriad influences of which we are not always aware, and often can’t formulate or quantify. Therefore it is unrealistic, she suggests, to expect an AI to achieve a human subtlety and balance in its decisions.

However, there have been some very promising results using AI in certain decision-making contexts, for example, in detecting certain kinds of disease. In some of these applications, it can be argued that the AI system can sidestep the biases, especially the attentional biases, of humans. But there are also cases where companies have allowed algorithms to act in highly inappropriate and insensitive ways towards individuals.

But perhaps the really big issue is that we really don’t understand what is happening inside these networks – certainly, the really ‘deep learning’ networks, where the hidden inner layers shift towards a degree of complexity which is beyond our powers to comprehend. This is an aspect which Stephanie would address.

Stephanie Mathisen is the policy manager at ‘Sense About Science’, a small independent campaigning charity based in London. SAS was set up in 2002, at a time when the media was struggling to cope with science-based topics such as genetic modification in farming, and the alleged link between the MMR vaccine and autism.

SAS works with researchers to help them to communicate better with the public, and has published a number of accessible topic guides, such as ‘Making Sense of Nuclear’, ‘Making Sense of Allergies’ and other titles on forensic genetics, chemical stories in the press, radiation, drug safety etc. They also run a campaign called ‘Ask For Evidence’, equipping people to ask questions about ‘scientific’ claims, perhaps by a politician asking for your vote, or a company for your custom.

But Stephanie’s main focus is their Evidence In Policy work, examining the role of scientific evidence in government policy formation. A recent SAS report surveyed how transparent twelve government departments are about their use of evidence. The focus was not the quality of the evidence, nor the appropriateness of the policies, just clarity about what evidence was taken into account in making those decisions, and how. In talking about the use of Artificial Intelligence in decision support, ‘meaningful transparency’ would be the main concern she would raise.

Sense About Science’s work on algorithms started a couple of years ago, following a lecture by Cory Doctorow, the author of the blog Boing Boing, which raised the question of ‘black box’ decision making in people’s lives. Around the same time, similar concerns were being raised by the independent investigative newsroom ‘ProPublica’, and in Cathy O’Neil’s book ‘Weapons of Math Destruction’. The director of Sense About Science urged Stephanie to read that book, and she heartily recommends it.

There are many parliamentary committees which scrutinise the work of government. The House of Commons Science and Technology Committee has an unusually broad remit. They put out an open call to the public, asking for suggestions for enquiry topics, and Stephanie wrote to suggest the role of algorithms in decision-making. Together with seven or eight others, Stephanie was invited to come and give a presentation, and she persuaded the Committee to launch an enquiry on the issue.

The SciTech Committee’s work was disrupted by the 2017 snap general election, but they pursued the topic, and reported in May 2018. (See https://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/news-parliament-2017/algorithms-in-decision-making-report-published-17-19-/)

Stephanie then treated us to a version of the ‘pitch’ which she gave to the Committee.

An algorithm is really no more than a set of steps carried out sequentially to give a desired outcome; a cooking recipe and directions for how to get to a place are everyday examples. Algorithms are everywhere, many implemented by machines, whether controlling the operation of a cash machine or placing your phone call. Algorithms are also behind the analysis of huge amounts of data, carrying out tasks that would be beyond the capacity of humans, efficiently and cheaply, and bringing a great deal of benefit to us. They are generally considered to be objective and impartial.

But in reality, there are troubling issues with algorithms. Quite rapidly, and without debate, they have been engaged to make important decisions about our lives. Such a decision would in the past have been made by a human, and though that person might be following a formulaic procedure, at least you can ask a person to explain what they are doing. What is different about computer algorithms is their potential complexity and ability to be applied at scale; which means, if there are biases ingrained in the algorithm, or in the data selected for them to process, those shortcomings will also be applied at scale, blindly, and inscrutably.

  • In education, algorithms have been used to rank teachers, and in some cases, to summarily sack the ‘lower-performing’ ones.
  • Algorithms generate sentencing guidelines in the criminal justice system, where analysis has found that they are stacked against black people.
  • Algorithms are used to determine credit scores, which in turn determine whether you get a loan, a mortgage, a credit card, even a job.
  • There are companies offering to create a credit score for people who don’t have a credit history, by using ‘proxy data’. They do deep data mining, investigating how people use social media, how they buy stuff online, and other evidence.
  • The adverts you get to see on Google and Facebook are determined through a huge algorithmic trading market.
  • For people working for Uber or Deliveroo, their bosses essentially are algorithms.
  • Algorithms help the Government Digital Service to decide what pages to display on the gov.uk Web site. The significance is, that site is the government’s interface with the public, especially now that individual departments have lost their own Web sites.
  • A recent Government Office for Science report suggests that government is very keen to increase its use of algorithms and Big Data – it calls them ‘data science techniques’ – in deploying resources for health, social care and the emergency services. Algorithms are being used in the fire service to determine which fire stations might be closed.

In China, the government is developing a comprehensive ‘social credit’ system – in truth, a kind of state-run reputation ranking system – where citizens will get merits or demerits for various behaviours. Living in a modestly-sized apartment might add points to your score; paying bills late or posting negative comments online would be penalised. Your score would then determine what resources you will have access to. For example, anyone defaulting on a court-ordered fine will not be allowed to buy first-class rail tickets, or to travel by air, or take a package holiday. The scheme is already being piloted, and is supposed to be fully rolled out as early as 2020.

(See Wikipedia article at https://en.wikipedia.org/wiki/Social_Credit_System and Wired article at https://www.wired.co.uk/article/china-social-credit.)

Stephanie suggested a closer look at the use of algorithms to rank teacher performance. Surely it is better to rank teachers using an unbiased algorithm? This is what happened in the Washington, DC school district in the USA – an example described in some depth in Cathy O’Neil’s book. At the end of the 2009–2010 school year, all teachers were ranked, largely on the basis of a comparison of their pupils’ test scores between one year and the next. On the basis of this assessment, 2% of teachers were summarily dismissed, and a further 5% lost their jobs the following year. But what if the algorithms were misconceived, and the teachers thus victimised were not bad teachers?

In this particular case, one of the fired teachers was rated very highly by her pupils and their parents. There was no way that she could work out the basis of the decision; later it emerged that it turned on this consecutive-year test score proxy, which had not taken into account the baseline performance from which those pupils came into her class.

It cannot be a good thing to have such decisions taken by an opaque process not open to scrutiny and criticism. Cathy O’Neil’s examples have been drawn from the USA, but Stephanie is pleased to note that since the Parliamentary Committee started looking at the effects of algorithms, more British examples have been emerging.

Summary:

  • Algorithms are often totally opaque, which makes them unchallengeable. If we don’t know how they are made, how do we know if they are weighted correctly? How do we know if they are fair?
  • Frequently, the decisions turned out by algorithms are not understood by the people who deliver those decisions. This may be because a ‘machine learning’ system was involved, such that the intermediate steps between input and output are undiscoverable. Or it may be that the service was bought from a third party. This is what banks do with credit scores – they can tell you Yes or No, and they can tell you what your credit score is, but they can’t explain how it was arrived at, or whether the data input was correct.
  • There are things that just can’t be measured with numbers. Consider again that example of teacher rankings: the algorithm can’t take account of issues such as how a teacher deals with the difficult problems that pupils bring from their home life; it sees only the test results.
  • Systems sometimes cannot learn when they are wrong, if there is no mechanism for feedback and course correction.
  • Blind faith in technology can lead to the humans who implement those algorithmically-made decisions failing to take responsibility.
  • The perception that algorithms are unbiased can be unfounded – as Tamara had already explained. When it comes to ‘training’ the system, which data do you include, which do you exclude, and is the data set appropriate? If it was originally collected for another purpose, it may not fit the current one.
  • ‘Success’ can be claimed even when people are having harm done to them. In the public sector, managers may have a sense of problems being ‘fixed’ when teachers are fired. If the objective is to make or save money, and teachers are being fired, and resources saved to be redeployed elsewhere, or profits are being made, it can seem like the model is working. Because the objective defined at the start has been met, the model appears to justify itself. And if we can’t scrutinise or challenge, agree or disagree, we are stuck in that loop.
  • Bias can exist within the data itself. A good example is university admissions, where historical and outdated social norms which we don’t want to see persist, still lurk there. Using historical admissions data as a training data set can entrench bias.
  • Then there is the principle of ‘fairness’. Algorithms consider a slew of statistics, and come out with a probability that someone might be a risky hire, or a bad borrower, or a bad teacher. But is it fair to treat people on the basis of a probability? We have been pooling risk for decades when it comes to insurance cover – as a society we seem happy with that, though we might get annoyed when the premium is decided because of our age rather than our skill in driving. But when sending people to prison, are we happy to tolerate the same level of uncertainty within data? And is past behaviour really a good predictor of future behaviour? Would we as individuals be happy to be treated on the basis of profiling statistics?
  • Because algorithms are opaque, there is a lot of scope for ‘hokum’. Businesses are employing algorithms; government and its agencies are buying their services; but if we don’t understand how the decisions are made, there is scope for agencies to be sold these services by snake-oil salesmen.

What next?

In the first place, we need to know where algorithms are being used to support decision-making, so we know how to challenge the decision.

When the SciTech committee published its report at the end of May, Stephanie was delighted that they took her suggestion to ask government to publish a list of all public-sector uses of algorithms that affect significant decisions, and of where such use is being planned. The Committee also wants government to identify a minister to provide government-wide oversight of such algorithms where they are used by the public sector, and to co-ordinate departments’ approaches to the development and deployment of algorithms and to partnerships with the private sector. They also recommended ‘transparency by default’ where algorithms affect the public.

Secondly, we need to ask for the evidence. If we don’t know how these decisions are being made, we don’t know how to challenge them. Whether teacher performance is being ranked, criminals sentenced or services cut, we need to know how those decisions are being made. Organisations should apply standards to their own use of algorithms, and government should be setting the right example. If decision-support algorithms are being used in the public sector, it is vital that people are treated fairly, that someone can be held accountable, that decisions are transparent, and that hidden prejudice is avoided.

The public sector, because it holds significant datasets, actually holds a lot of power that it doesn’t seem to appreciate. In a couple of cases recently, it’s given data away without demanding transparency in return. A notorious example was the 2016 deal between the Royal Free Hospital and Google DeepMind, to develop algorithms to predict kidney failure, which led to the inappropriate transfer of personal sensitive data.

In the Budget of November 2017, the government announced a new Centre for Data Ethics and Innovation, but it hasn’t really talked about its remit yet. It is consulting on this until September 2018, so maybe by the end of the year we will know something. The SciTech Committee report had lots of strong recommendations for what its remit should be, including evaluation of accountability tools, and examining biases.

The Royal Statistical Society also has a council on data ethics, and the Nuffield Foundation set up a new commission, now the Convention on Data Ethics. Stephanie’s concern is that we now have several different bodies paying attention, but they should all set out their remits to avoid the duplication of work, so we know whose reports to read, and whose recommendations to follow. There needs to be some joined-up thinking, but currently it seems none are listening to each other.

Who might create a clear standard framework for data ethics? Chi Onwurah, the Labour Shadow Minister for Business, Energy and Industrial Strategy, recently said that the role of government is not to regulate every detail, but to set out a vision for the type of society we want, and the principles underlying that. She has also said that we need to debate those principles; once they are clarified, it makes it easier (but not necessarily easy) to have discussions about the standards we need, and how to define them and meet them practically.

Stephanie looks forward to seeing the Government’s response to the Science and Technology Committee’s report – a response which is required by law.

A suggested Code of Conduct came out in late 2016, with five principles for algorithms and their use: Responsibility – someone in authority to deal with anything that goes wrong, and in a timely fashion; Explainability – the new GDPR includes a clause giving a right to explanation of decisions that have been made about you by algorithms (although this is now law, much will depend on how it is interpreted in the courts); and the remaining three principles, Accuracy, Auditability and Fairness.

So basically, we need to ask questions about the protection of people, and there have to be these points of challenge. Organisations need to ensure mechanisms of recourse if anything does go wrong, and they should also consider liability. At a recent speaking engagement on this topic, Stephanie addressed a roomful of lawyers; she told them they should not see this as a way to shirk liability, but should think ahead about what will happen.

This conversation is at the moment being driven by the autonomous car industry, who are worried about insurance and insurability. When something goes wrong with an algorithm, whose fault might it be? Is it the person who asked for it to be created, and deployed it? The person who designed it? Might something have gone wrong in the Cloud that day, such that a perfectly good algorithm just didn’t work as it was supposed to? ‘People need to get to grips with these liability issues now, otherwise it will be too late, and some individual or group of individuals will get screwed over,’ said Stephanie, ‘while companies try to say that it wasn’t their fault.’

Regulation might not turn out to be the answer. If you do regulate, what do you regulate? The algorithms themselves, similar to the manner in which medicines are scrutinised by the medicines regulator? Or the use of the algorithms? Or the outcomes? Or something else entirely?

Companies like Google, Facebook, Amazon and Microsoft – have they lost the ability to regulate themselves? How are companies regulating themselves, and should they? Stephanie doesn’t think we can rely on that. Those are some of the questions she put to the audience.

Tamara took back the baton. She noted that we interact extensively with AI through many aspects of our lives. Many jobs that have been thought of as a human preserve, thinking jobs, may become more automated, handled by a computer or neural network. Jobs as we know them now may not be the jobs of the future. Does that mean unemployment, or just a change in the nature of work? It’s likely that in future we will be working side by side with AI on a regular basis. Already, decisions about bank loans, insurance, parole and employment increasingly rely on AI.

As humans, we are used to interacting with each other. How will we interact with non-humans? Specifically, with AI entities? Tamara referenced the famous ‘ELIZA’ experiment conducted 1964–68 by Joseph Weizenbaum, in which a computer program was written to simulate a practitioner of person-centred psychotherapy, communicating with a user via text dialogue. In response to text typed in by the user, the ELIZA program responded with a question, as if trying sympathetically to elicit further explanation or information from the user. This illustrates how we tend to project human qualities onto these non-human systems. (A wealth of other examples are given in Sherry Turkle’s 1984 book, ‘The Second Self’.)

However, sometimes machine/human interactions don’t happen so smoothly. Robotics professor Masahiro Mori studied this in the 1970s, examining people’s reactions to robots made to appear human. Many people responded to such robots with greater warmth as they were made to appear more human, but at a certain point along that transition there was an experience of unease and revulsion, which he dubbed the ‘Uncanny Valley’. This is the point at which something jarring about the appearance, behaviour or mode of conversation of the artificial human makes you feel uncomfortable and shatters the illusion.

‘Uncanny Valley’ research has been continued since Mori’s original work. It has significance for computer-generated on-screen avatars, and CGI characters in movies. A useful discussion of this phenomenon can be found in the Wikipedia article at https://en.wikipedia.org/wiki/Uncanny_valley

There is a Virtual Personal Assistant service for iOS devices, called ‘Fin’, which Tamara referenced (see https://www.fin.com). Combining an iOS app with a cloud-based computation service, ‘Fin’ avoids some of the risk of Uncanny Valley by interacting purely through voice command and on-screen text response. Is that how people might feel comfortable interacting with an AI? Or would people prefer something that attempts to represent a human presence?

Clare Parry remarked that she had been at an event about care robots, where you don’t get an Uncanny Valley effect because despite a broadly humanoid form, they are obviously robots. Clare also thought that although robots (including autonomous cars) might do bad things, they aren’t going to do the kind of bad things that humans do, and machines do some things better than people do. An autonomous car doesn’t get drunk or suffer from road-rage…

Tamara concluded by observing that our interactions with these systems shape how we behave. This is not a new thing – we have always been shaped by the systems and the tools that we create. The printing press moved us from an oral/social method of sharing stories to a more individual experience, which arguably has made us more individualistic as a society. Perhaps our interactions with AI will shape us similarly, and we should stop and think about the implications for society. Will a partnership with AI bring out the best of our humanity, or make us more machine-like?

Tamara would prefer us not to think of Artificial Intelligence as a reified machine system, but of Intelligence Augmented, shifting the focus of discussion onto how these systems can help us flourish. And who are the people that need that help the most? Can we use these systems to deal with the big problems we face, such as poverty, climate change, disease and others? How can we integrate these computational assistances to help us make the best of what makes us human?

There was so much food for thought in the lectures that everyone was happy to talk together in the final discussion and the chat over refreshments that followed.  We could campaign to say, ‘We’ve got to understand the algorithms, we’ve got to have them documented’, but perhaps there are certain kinds of AI practice (such as those involved in medical diagnosis from imaging input) where it is just not going to be possible.

From a blog by Conrad Taylor, June 2018


July 2018 Seminar: Machines and Morality: Can AI be Ethical?

Summary

At this meeting Stephanie Mathisen, Policy Manager at Sense About Science, and Tamara Ansons, Behavioural Science Consultant at Ipsos, addressed the question of the Ethics of Artificial Intelligence – is it possible for machines to have morality?

Speakers

Dr Tamara L Ansons is an expert on behavioural science. After receiving her PhD in Brain and Cognitive Sciences from the University of Manitoba, she did a post-doc in Marketing at the University of Michigan and then worked as an Assistant Professor of Marketing at Warwick Business School before moving to LSE to manage their Behavioural Research Lab. Her academic research focused on examining how subtle cognitive processes and contextual or situational factors non-consciously alter how individuals form judgments and behave. Much of this work has focused on how cognitive psychology can be applied to provide a deeper understanding of our interactions with technology – from online search behaviour, to social media and immersive technologies. She has published her research across a range of academic journals and books, and presented her research at many international conferences. At Ipsos Tamara is drawing on her expertise to translate academic research into scalable business practices. Recent projects that she has contributed to while at Ipsos include: Using goal setting and technology to increase physical activity in a healthcare community; Examining the psychology of technology adoption; Applying behavioural science to optimise digital experiences; Developing a model of behaviour change to better understand the barriers and enablers of secure cyber behaviour.

Dr Stephanie Mathisen is policy manager at Sense about Science, an independent charity that ensures the public interest in sound science and evidence is recognised in public debates and policymaking. Steph has just organised the first ever Evidence Week in the UK parliament, which took place 25–28 June this year. Steph works on transparency about evidence in policy and decision-making, including assessing the UK government’s performance on that front. She submits evidence to parliamentary inquiries and coordinates Sense about Science’s continuing role in the Libel Reform Campaign. In February 2017, Steph persuaded the House of Commons science and technology committee to launch an inquiry into the use of algorithms in decision-making.

Time and Venue

2pm on 26th July 2018, The British Dental Association, 64 Wimpole Street, London W1G 8YS

Pre Event Information

The speakers at this meeting will be addressing the question of the Ethics of Artificial Intelligence – is it possible for machines to have morality? To do this, they’ll be unpacking the hype currently surrounding the subject of AI – how much of it is justified, and how do they see these new technologies influencing human society over the coming decades? The potential of AI and its many applications needs little encouragement to spark enthusiastic intrigue and adoption. For example, when it comes to managing customer experiences, Gartner estimates that 85% of customer interactions will be managed without humans by 2020.

However, as we plough ahead with the adoption of AI, it hasn’t taken long to realise that incorporating AI into our lives needs to be handled with a careful, measured approach. Indeed, unpacking AI’s integration into our lives provides us with an opportunity – and responsibility – to ensure AI brings out the best of our humanness while mitigating our shortcomings. It is through careful integration that the promise of AI can be realised, helping us to address the big challenges we face.

Tamara Ansons will look at:
• Human input in the creation of AI (relating to the coders and to AI training)
• AI and measurement (spinning off from the previous point: how AI guides our focus to the specific and measurable)
• Humanising technology (where we do humanise, and where some barriers exist)

Stephanie Mathisen will address the importance of:
• Meaningful transparency around algorithms used in decision-making processes (to challenge or agree; fairness)
• Scrutiny
• Accountability

Slides

No slides available for this presentation

Tweets

#netikx93

Blog

See our blog report: Machines and Morality: Can AI be Ethical?

Study Suggestions

The SciTech Committee can be found here

June 2018 Seminar: Organising Medical and Health-related Information

Summary

At this meeting, held in Leeds, Ewan Davis, one of the first developers of a GP computerised health record system, discussed Electronic Health Records. This was a joint meeting with ISKO UK.

Speakers

Ewan Davis was one of the first developers of a GP computerised health record system. His background is solidly in Health Informatics and more recently he has been championing two things: the use of apps on handheld devices to support medical staff, patients and carers, and the use of open (non-proprietary) standards and information exchange formats in health informatics. Indeed, he is not long back from a launch in Plymouth by the local NHS trust of an integration system based on the OpenEHR standard – see https://en.wikipedia.org/wiki/OpenEHR. We also hope to have a second speaker on other aspects of EHR.

Time and Venue

2pm on 7th June 2018, The British Dental Association, 64 Wimpole Street, London W1G 8YS

Pre Event Information

This meeting, NetIKX’s first outside London for several years, will focus on health-related information. The main speaker will be Ewan Davis, who has pioneered Electronic Health Records (EHR). He will look in particular at the relationship between clinical EHRs (prepared by medical professionals), Personal Health Records (PHRs), which are managed by individuals themselves, and the Co-produced PHR, a proposed hybrid between these two types of record.

Slides

No slides available for this presentation

Tweets

Due to a power cut there were no tweets from this event

Blog

See our blog report: Organising Medical and Health Related Information

Study Suggestions

Our partner organisation can be found here


May 2018 Seminar: Trust and integrity in information

Summary

The question of how we identify trustworthy sources of information formed the basis of this varied and thought-provoking seminar. Hanna Chalmers, Senior Director at Ipsos MORI, detailed the initial results of a recent poll on trust in the media. Events such as the Cambridge Analytica scandal have resulted in a general sense that trust in the media is in a state of crisis; Hanna suggested that it is more accurate to talk of trust in the media as being in flux, rather than in crisis. Dr Brennan Jacoby, of Philosophy at Work, approached the topic of trust from a different angle: what do we mean by trust? The element of vulnerability is what distinguishes trust from mere reliance: when we trust, we risk being betrayed. This resulted in a fascinating discussion with practical audience suggestions.

Speakers

Hanna Chalmers is a media research expert, having worked client side at the BBC and Universal Music before moving agency side with a stint at IPG agency Initiative and joining Ipsos as a senior director in the quality team just under three years ago. Hanna works across a broad range of media and tech clients exploring areas that are often high up the political agenda. Hanna has been looking at trust in media over the last year and is delighted to be showcasing some of the most recent findings of a global survey looking at our relationship with trust and the media around the world.
Dr Brennan Jacoby is the founder of Philosophy at Work. A philosophy PhD, Brennan has spent the last 6 years helping businesses address their most important issues. While he specialises in bringing a thoughtful approach to a range of topics, from resilience and communication to innovation and leadership, his PhD analysed trust, and he has written, presented and trained widely on the topic of trustworthiness and how to build trust. Recent organisations he has worked with include Deloitte, Media Arts Lab and Viacom. Website: https://philosophyatwork.co.uk/dr-brennan-jacoby/

Time and Venue

2pm on 24th May 2018, The British Dental Association, 64 Wimpole Street, London W1G 8YS

Pre Event Information

In this new media age, the flow of information is often much faster than our ability to absorb and criticise it. This poses a whole set of problems for us individually, and in our organisations and social groupings, especially as important decisions with practical consequences are often made on the basis of our possibly ill-informed judgements. There is currently a huge interest in the area of ‘fake news’ and ‘alternative facts’ and other ‘post truth’ information disorders circulating in the traditional and social media, and it is appropriate for us as Knowledge and Information Professionals to be able to operate successfully in this increasingly difficult environment, and provide expertise in information literacy and fact-checking to bring to our workplaces.

Slides

No slides available for this presentation

Tweets

#netikx91

Blog

See our blog report: Trust and Integrity in Information

Study Suggestions

Article on Global Trust in Media: https://digiday.com/media/global-state-trust-media-5-charts/

March 2018 Seminar: Working in Complexity – SenseMaker, Decisions and Cynefin

Summary

At this meeting attendees were given the opportunity to take part in experimenting with a number of tools and analytical approaches that have been used to good effect in dealing with intractable, complex problems.  It was a lively action-packed meeting with useful learning for Monday morning, and plenty of opportunity for networking and exchanging ideas and experience across organisations.

Speaker

Tony Quinlan is an independent consultant and a member of the Cognitive Edge network of practitioners founded by Dave Snowden in 2004.  As a co-trainer with Dave, Tony has worked internationally, teaching techniques for addressing complexity to a variety of organisations.
Tony has used SenseMaker® in over 50 projects in the past decade, including in Europe, Asia, Africa and Latin America. He has helped organisations such as the European Commission, United Nations Development Programme and various UK government departments work with the Cynefin framework since 2005. This mix gives him a unique combination of theoretical foundations and practical field experience.
Tony blogs at https://narrate.co.uk/news/

Time and Venue

2pm on 7th March 2018, The British Dental Association, 64 Wimpole Street, London W1G 8YS

Pre Event Information

In complex, uncertain and dynamically changing situations, there is a need for good, context-heavy and up-to-date information which decision-makers can access fast. The traditional approaches – such as questionnaires, citizen polling, employee engagement surveys and patient focus groups – have all had limited success in meeting that need, and they are failing to support decision-makers with appropriate strategies to deal with the inherent uncertainty of complexity.

This NetIKX seminar session will give attendees the opportunity to take part in experimenting with a number of tools and analytical approaches that have been used to good effect in dealing with intractable, complex problems. In particular we will look at:

  • SenseMaker® narrative research methods, and the software tools that support them;
  • the Cynefin framework for analysing complexity;
  • ways of co-creating projects to address such problems.

The afternoon will be interactive from the beginning – including an ‘acoustic SenseMaker®’ exercise, along with examples from various organisations; an explanation of the underlying principles; and how to make best use of these methods to intervene in evolving situations and to obtain desirable outcomes.

Time allowing, the afternoon will also include discussion and exercises around how this approach can be combined with the Cynefin framework to improve organisational resilience and decision-making. A pdf giving detail of the meeting is available at Working in Complexity

Slides

No slides available for this presentation

Tweets

#netikx90

Blog

See our blog report: Working in Complexity.

Study resources

Tony Quinlan suggests this website: https://narrate.co.uk/

January 2018 Seminar: Making true connections in a complex world: new technologies to link facts, concepts and data

Summary

At this meeting new approaches to Linked Data and Graph Technology were presented and discussed. Dion Lindsay introduced the New Graph Technology of Information and David Clarke discussed Building Rich Search and Discovery User Experiences with Linked Open Data.

Speakers

Dion Lindsay
Introducing the New Graph Technology of Information. Graph technology is a rapidly growing method of making complex datasets visually engaging and explorable in new ways, revealing hidden patterns and creating actionable insights. It is being applied to the vast and unruly sets of unstructured data with which traditional relational database technology has not been able to come to terms, but which enterprises own and are anxious to exploit.

David Clarke
Building Rich Search and Discovery User Experiences with Linked Open Data. This presentation will demonstrate how to leverage Linked Open Data for search and discovery applications. The Linked Open Data cloud is a rapidly growing collection of publicly accessible resources, which can be adopted and reused to enrich both internal enterprise projects and public-facing information systems. Linked Open Data resources live in graph databases, formatted as RDF triple stores. Two use-cases will be explored.
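By way of a concrete, invented illustration of what ‘RDF triple stores’ means: each statement in such a store is a subject–predicate–object triple, and SPARQL is the standard language for querying them. A minimal sketch in Python with the rdflib library (the ex: names are made up for this example):

    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()

    # Each statement is one triple: (subject, predicate, object).
    g.add((EX.LinkedData, RDF.type, EX.Topic))
    g.add((EX.LinkedData, RDFS.label, Literal("Linked Data")))
    g.add((EX.LinkedData, EX.relatedTo, EX.SemanticWeb))

    # A SPARQL query: find the labels of everything typed as ex:Topic.
    results = g.query("""
        PREFIX ex: <http://example.org/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?label WHERE {
            ?topic a ex:Topic ;
                   rdfs:label ?label .
        }
    """)
    for row in results:
        print(row.label)  # -> Linked Data

The same query pattern works against publicly accessible Linked Open Data endpoints, which is what makes those resources reusable in the way the presentation describes.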

Time and Venue

2pm on 25th January 2018, The British Dental Association, 64 Wimpole Street, London W1G 8YS

Pre Event Information

NetIKX offers KM and IM professionals a chance to increase their understanding of the new technology approaches that are changing and challenging our work. Our next seminar will give you a chance to confidently discuss and assess the opportunities of new approaches to Linked Data and Graph Technology that can enhance your work and your organisational value.
In everyday language, a ‘graph’ is a visual representation of quantitative data. But in computing and information management, the word can also refer to a data structure in which entities are considered as nodes in a network diagram, with links (relationships) between some of them.
Both the entities and the relationships can also be recorded as having ‘properties’ or ‘attributes’, quantitative and qualitative.
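As a small, hedged sketch of that idea (invented data, in plain Python rather than any particular graph database product): entities become nodes, relationships become typed links, and both can carry properties.

    # Nodes (entities), each with properties.
    nodes = {
        "alice":  {"kind": "Person", "role": "information architect"},
        "netikx": {"kind": "Organisation", "founded": 2007},
    }

    # Edges (relationships): (source, relationship type, target, properties).
    # Note that the relationship itself carries an attribute, 'since'.
    edges = [
        ("alice", "MEMBER_OF", "netikx", {"since": 2018}),
    ]

    def neighbours(node_id):
        """Follow outgoing relationships from one entity to its connections."""
        return [(rel, dst, props)
                for src, rel, dst, props in edges if src == node_id]

    print(neighbours("alice"))  # -> [('MEMBER_OF', 'netikx', {'since': 2018})]

Graph databases index exactly this kind of structure, so that traversals such as friend-of-a-friend, shortest-path and pattern-matching queries stay fast as the data grows.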

Slides

No slides available for this presentation

Tweets

#netikx89

Blog

See our blog report: Making True Connections

Study Suggestions

The Neo4j team produced a book by Ian Robinson, Jim Webber and Emil Eifrem called ‘Graph Databases’, and it is available for free (PDF, Kindle etc) from https://neo4j.com/graph-databases-book/

November 2017 Seminar: The Future for Information and Knowledge Professionals – Tenth Anniversary Seminar

Summary

2017 has been the tenth anniversary of the founding of NetIKX and this meeting was a celebration of this. The programme focused on the situation of knowledge and information professionals in 2017. Talks to set the scene were from Peter Thomson on major changes to the world of work and from Stuart Ward, Chair of NetIKX at its inception, who focused more closely on how KM and IM people can provide value in the workplace in this changing world. Then participants were invited to discuss the key ideas that they thought were the most relevant and put questions to a panel composed of people active and influential in our field.

What are the important trends in employment that we face, and what is the role of communities like NetIKX that operate in this field? We looked back over the last ten years to set the scene for the changes we need to prepare for in the coming years. We also involved people from related organisations such as CILIP, ISKO UK and LIKE.
There were two introductory talks: first, Peter Thomson looked at major changes to the world of work; then Stuart Ward, Chair of NetIKX at its inception, focused more closely on how KM and IM people can provide value in the workplace in this changing world. Next, in the usual NetIKX syndicate session, participants were invited to discuss the key ideas that they thought were the most relevant. After this, to gain a wider perspective, questions based on these discussions were put to a panel composed of people active and influential in our field. These were David Haynes (Chair of ISKO UK), David Gurteen, David Smith (Government KIM Head of Profession), Karen McFarlane (Chair of the CILIP Board), Steve Dale, and Noeleen Schenk (Metataxis Ltd, who has also been running a series of meetings on the future of knowledge and information management).

After a lively panel Q and A session, there was time for further discussion and networking over generous celebratory refreshments.

Speakers

Peter Thomson is an expert on the changing world of work and its impact on organisations, leadership and management. He regularly speaks on this topic at conferences and has worked with many groups of senior managers to inspire them to change their organisational culture. He headed up the HR function for Digital Equipment for Northern Europe for 18 years leading up to the dawn of the Internet. On leaving DEC, Peter founded the Future Work Forum at Henley Business School. He was Director of the Forum for 16 years, during which time he studied the changing patterns of work and the leadership implications of these trends. At the same time he formed Wisework Ltd, now a leading consultancy in the field of smart working. Peter is co-author, with Alison Maitland, of the business bestseller Future Work. He is also editor of a new book Conquering Digital Overload, which is about to be published. As a consultant and coach, he works with leadership teams and individuals to help them gain the maximum business benefit from new working practices. As a writer and researcher he is fascinated by the evolving role of leadership and management as we move into the ‘Gig Economy’.

Stuart Ward has been involved with NetIKX and its predecessors for over 15 years. With others he launched NetIKX 10 years ago and was the first Chairman. Stuart has wide experience in information and knowledge management and ICT, gained in business and as an independent consultant; he is interested in strategies that help to maximise the value of knowledge and information for organisations. Stuart began his career in IT and project management and, after developing a keen interest in improving the use of information in organisations, he became Director of Information Management at British Energy. In 1997 he established Forward Consulting to help organisations improve performance through information and knowledge management. He has worked with clients in both the public and private sectors. As an Associate of the IMPACT Programme, he managed their Information and Knowledge Exploitation Group from 1997 to 1999 and then again from 2004 to 2006. He was instrumental in developing the theme of the Hawley Committee: Information as an Asset with practical tools for use in business. In previous roles, Stuart has been a visiting lecturer at City University, Chairman of the Judging Panel for the British Computer Society Annual Business Achievement Awards, and chaired conference organising committees for Aslib. He is also currently an Associate of the College of Policing.

Time and Venue

2pm on Thursday 16th November 2017, The British Dental Association, 64 Wimpole Street, London W1G 8YS

Slides

Not available

Tweets

#netikx88

Blog

See our blog report: The Future of Work for Information and Knowledge Professionals

Study Suggestions

None