In discussions of AI, one issue that is often raised is that of the ‘black box’ problem, where we cannot know how a machine system comes to its decisions and recommendations. That is particularly true of the class of self-training ‘deep machine learning’ systems which have been making the headlines in recent medical research.
Dr Tamara Ansons has a background in Cognitive Psychology and works for Ipsos MORI, applying academic research, principally from psychology, to various client-serving projects. In her PhD work, she looked at memory and how it influences decision-making; in the course of that, she investigated neural networks, as a form of representation for how memory stores and uses information.
At our NetIKX seminar for July 2018, she observed that ‘Artificial Intelligence’ is being used across a range of purposes that affect our lives, from mundane to highly significant. Recently, she thinks, the technology has been developing so fast that we have not been stepping back enough to think about the implications properly.
Tamara displayed an amusing image, an array of small photos of round light-brown objects, each one marked with three dark patches. Some were photos of chihuahua puppies, and the others were muffins with three raisins on top! People can easily distinguish between a dog and a muffin, a raisin and an eye or doggy nose. But for a computing system, such tasks are fairly difficult. Given the discrepancy in capability, how confident should we feel about handing over decisions with moral consequences to these machines?
Tamara stated that the ideas behind neural networks have emerged from cognitive psychology, from a belief that how we learn and understand information is through a network of interconnected concepts. She illustrated this with diagrams in which one concept, ‘dog’, was connected to others such as ‘tail’, ‘has fur’, ‘barks’ [but note, there are dogs without fur and dogs that don’t bark]. From a ‘connectionist’ view, our understanding of what a dog is, is based around these features of identity, and how they are represented in our cognitive system. In cognitive psychology, there is a debate between this view and a ‘symbolist’ interpretation, which says that we don’t necessarily abstract from finer feature details, but process information more as a whole.
This connectionist model of mental activity, said Tamara, can be useful in approaching some specialist tasks. Suppose you are developing skill at a task that presents itself to you frequently – putting a tyre on a wheel, gutting fish, sewing a hem, planing wood. We can think of the cognitive system as having component elements that, with practice and through reinforcement, become more strongly associated with each other, such that one becomes better at doing that task.
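The idea of associations strengthening with practice can be sketched in a few lines of code – a toy Hebbian-style update, invented here for illustration rather than drawn from the talk, in which repeatedly activating two concepts together strengthens the link between them:

```python
# Toy connectionist sketch: concepts are nodes, associations are weighted
# links, and repeated co-activation ('practice') strengthens a link.
# The concepts, starting weight and learning rate are all illustrative.

class ConceptNetwork:
    def __init__(self):
        self.weights = {}  # frozenset({concept_a, concept_b}) -> strength

    def link(self, a, b, weight=0.1):
        self.weights[frozenset((a, b))] = weight

    def practise(self, a, b, rate=0.2):
        # Hebbian-style update: move the association strength a fraction
        # of the way towards its ceiling of 1.0 each time the two
        # concepts are activated together.
        key = frozenset((a, b))
        current = self.weights.get(key, 0.0)
        self.weights[key] = current + rate * (1.0 - current)

    def strength(self, a, b):
        return self.weights.get(frozenset((a, b)), 0.0)

net = ConceptNetwork()
net.link('dog', 'has fur')
before = net.strength('dog', 'has fur')
for _ in range(10):            # repeated exposure, i.e. practice
    net.practise('dog', 'has fur')
after = net.strength('dog', 'has fur')
```

After ten 'practice' trials the dog–fur association is much stronger than at the start, while never quite saturating – a crude analogue of skill consolidating with repetition.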
Humans tend to have a fairly good task-specific ability. We learn new tasks well, and our performance improves with practice. But does this encapsulate what it means to be intelligent? Human intelligence is not just characterised by the ability to do certain tasks well. Tamara argued that what makes humans unique is our adaptability, the ability to take learnings from one context and apply them imaginatively to another. And humans don’t have to learn something over many, many trials. We can learn from a single significant event.
An algorithm is a set of rules which specify how certain bits of information are combined in a stepwise process. As an example, Tamara suggested a recipe for baking a cake.
Many algorithms can be represented with a kind of node-link diagram that specifies the inputs on one side and the outputs on the other, with intermediate steps between them to move from input to output. The output is a weighted aggregate of the information that went into the algorithm.
When we talk about ‘learning’ in the context of such a system – ‘machine learning’ is a common phrase – a feedback or evaluation loop assesses how successful the algorithm is at matching inputs to acceptable decisions; and the system must be able to modify its algorithms to achieve better matches.
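That description – a weighted aggregate of inputs, plus a feedback loop that nudges the system towards better matches – can be made concrete with a one-neuron sketch. The task (learning the logical AND of two inputs) and the learning rate are invented for illustration; the talk did not describe any particular system:

```python
# A single artificial 'neuron': the output is a weighted aggregate of the
# inputs, and a feedback loop adjusts the weights whenever the output is
# wrong, so the system gradually matches inputs to acceptable decisions.

def predict(weights, bias, inputs):
    # Weighted aggregate of the information fed into the algorithm.
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(examples, epochs=20, rate=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - predict(weights, bias, inputs)  # feedback signal
            # Modify the algorithm (its weights) to get a better match.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Training data: the system should output 1 only when both inputs are 1.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(examples)
```

No rule for AND is ever written down: the weights start at zero and the feedback loop alone shapes them until every example is matched correctly – the essence of ‘machine learning’ in miniature.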
Tamara suggests that at a basic level, we must recognise that humans are the ones feeding training data to the neural network system – texts, images, audio etc. The implication is that the accuracy of machine learning is only as good as the data you give it. If all the ‘dog’ pictures we give it are of Jack Russell terriers, it’s going to struggle at identifying a Labrador as a dog. We should also think about the people who develop these systems – they are hardly a model of diversity, and women and ethnic minorities are under-represented. The cognitive biases of the developer community can influence how machine learning systems are trained, what classifications they are asked to apply, and therefore how they work.
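Tamara’s Jack Russell example can be illustrated with a deliberately crude sketch: a ‘dog detector’ that learns only the range of body weights it saw in training. All the numbers here are rough figures invented for illustration:

```python
# A deliberately crude 'dog detector' that learns only the range of body
# weights seen in training. Trained solely on Jack Russell terriers, it
# rejects a Labrador. All figures are rough and purely illustrative.

def learn_weight_range(training_weights_kg):
    return min(training_weights_kg), max(training_weights_kg)

def looks_like_a_dog(weight_kg, learned_range, tolerance_kg=1.0):
    low, high = learned_range
    return (low - tolerance_kg) <= weight_kg <= (high + tolerance_kg)

jack_russells_kg = [5.0, 5.5, 6.0, 6.5, 7.0]   # a narrow training sample
dog_range = learn_weight_range(jack_russells_kg)

recognises_jack_russell = looks_like_a_dog(6.2, dog_range)   # True
recognises_labrador = looks_like_a_dog(30.0, dog_range)      # False
```

The detector is not ‘wrong’ on its own terms – it faithfully reflects the data it was given. The failure lies in the narrowness of the training sample, which is exactly the point about who selects the data.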
If the system is doing something fairly trivial, such as guessing what word you meant to type when you make a keyboarding mistake, there isn’t much to worry about. But what if the system is deciding whether and on what terms to give us insurance, or a bank loan or mortgage? It is critically important that we know how these systems have been developed, and by whom, to ensure that there are no unfair biases at work.
Tamara said that an ‘AI’ system develops its understanding of the world from the explicit input with which it is fed. She suggested that in contrast, humans make decisions, and act, on the basis of myriad influences of which we are not always aware, and often can’t formulate or quantify. Therefore it is unrealistic, she suggests, to expect an AI to achieve a human subtlety and balance in its decisions.
However, there have been some very promising results using AI in certain decision-making contexts, for example, in detecting certain kinds of disease. In some of these applications, it can be argued that the AI system can sidestep the biases, especially the attentional biases, of humans. But there are also cases where companies have allowed algorithms to act in highly inappropriate and insensitive ways towards individuals.
But perhaps the really big issue is that we really don’t understand what is happening inside these networks – certainly, the really ‘deep learning’ networks where the hidden inner layers shift towards a degree of inner complexity which it is beyond our powers to comprehend. This is an aspect which Stephanie would address.
Stephanie Mathieson is the policy manager at ‘Sense About Science’, a small independent campaigning charity based in London. SAS was set up in 2002 as the media was struggling to cope with science-based topics such as genetic modification in farming, and the alleged link between the MMR vaccine and autism.
SAS works with researchers to help them to communicate better with the public, and has published a number of accessible topic guides, such as ‘Making Sense of Nuclear’, ‘Making Sense of Allergies’ and other titles on forensic genetics, chemical stories in the press, radiation, drug safety etc. They also run a campaign called ‘Ask For Evidence’, equipping people to ask questions about ‘scientific’ claims, perhaps by a politician asking for your vote, or a company for your custom.
But Stephanie’s main focus is around their Evidence In Policy work, examining the role of scientific evidence in government policy formation. A recent SAS report surveyed how transparent twelve government departments are about their use of evidence. The focus is not about the quality of evidence, nor the appropriateness of policies, just on being clear what evidence was taken into account in making those decisions, and how. In talking about the use of Artificial Intelligence in decision support, ‘meaningful transparency’ would be the main concern she would raise.
Sense About Science’s work on algorithms started a couple of years ago, following a lecture by Cory Doctorow, the author of the blog Boing Boing, which raised the question of ‘black box’ decision making in people’s lives. Around the same time, similar concerns were being raised by the independent investigative newsroom ‘ProPublica’, and Cathy O’Neil’s book ‘Weapons of Math Destruction’. The director of Sense About Science urged Stephanie to read that book, and she heartily recommends it.
There are many parliamentary committees which scrutinise the work of government. The House of Commons Science and Technology Committee has an unusually broad remit. They put out an open call to the public, asking for suggestions for enquiry topics, and Stephanie wrote to suggest the role of algorithms in decision-making. Together with seven or eight others, Stephanie was invited to come and give a presentation, and she persuaded the Committee to launch an enquiry on the issue.
The SciTech Committee’s work was disrupted by the 2017 snap general election, but they pursued the topic, and reported in May 2018. (See https://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/news-parliament-2017/algorithms-in-decision-making-report-published-17-19-/)
Stephanie then treated us to a version of the ‘pitch’ which she gave to the Committee.
An algorithm is really no more than a set of steps carried out sequentially to give a desired outcome. A cooking recipe, directions for how to get to a place, are everyday examples. Algorithms are everywhere, many implemented by machines, whether controlling the operation of a cash machine or placing your phone call. Algorithms are also behind the analysis of huge amounts of data, carrying out tasks that would be beyond the capacity of humans, efficiently and cheaply, and bringing a great deal of benefit to us. They are generally considered to be objective and impartial.
But in reality, there are troubling issues with algorithms. Quite rapidly, and without debate, they have been engaged to make important decisions about our lives. Such a decision would in the past have been made by a human, and though that person might be following a formulaic procedure, at least you can ask a person to explain what they are doing. What is different about computer algorithms is their potential complexity and ability to be applied at scale; which means, if there are biases ingrained in the algorithm, or in the data selected for them to process, those shortcomings will also be applied at scale, blindly, and inscrutably.
- In education, algorithms have been used to rank teachers, and in some cases, to summarily sack the ‘lower-performing’ ones.
- Algorithms generate sentencing guidelines in the criminal justice system, where analysis has found that they are stacked against black people.
- Algorithms are used to determine credit scores, which in turn determine whether you get a loan, a mortgage, a credit card, even a job.
- There are companies offering to create a credit score for people who don’t have a credit history, by using ‘proxy data’. They do deep data mining, investigating how people use social media, how they buy stuff online, and other evidence.
- The adverts you get to see on Google and Facebook are determined through a huge algorithmic trading market.
- For people working for Uber or Deliveroo, their bosses essentially are algorithms.
- Algorithms help the Government Digital Service to decide what pages to display on the gov.uk Web site. The significance is, that site is the government’s interface with the public, especially now that individual departments have lost their own Web sites.
- A recent Government Office for Science report suggests that government is very keen to increase its use of algorithms and Big Data – it calls them ‘data science techniques’ – in deploying resources for health, social care and the emergency services. Algorithms are being used in the fire service to determine which fire stations might be closed.
In China, the government is developing a comprehensive ‘social credit’ system – in truth, a kind of state-run reputation ranking system – where citizens will get merits or demerits for various behaviours. Living in a modestly-sized apartment might add points to your score; paying bills late or posting negative comments online would be penalised. Your score would then determine what resources you will have access to. For example, anyone defaulting on a court-ordered fine will not be allowed to buy first-class rail tickets, or to travel by air, or take a package holiday. The scheme is already being piloted, and is supposed to be fully rolled out as early as 2020.
(See Wikipedia article at https://en.wikipedia.org/wiki/Social_Credit_System and Wired article at https://www.wired.co.uk/article/china-social-credit.)
Stephanie suggested a closer look at the use of algorithms to rank teacher performance. Surely it is better to do so using an unbiased algorithm? This is what happened in the Washington, D.C. school district in the USA – an example described in some depth in Cathy O’Neil’s book. At the end of the 2009–2010 school year, all teachers were ranked, largely on the basis of a comparison of their pupils’ test scores between one year and the next. On the basis of this assessment, 2% of teachers were summarily dismissed and a further 5% lost their jobs the following year. But what if the algorithms were misconceived, and the teachers thus victimised were not bad teachers?
In this particular case, one of the fired teachers was rated very highly by her pupils and their parents. There was no way that she could work out the basis of the decision; later it emerged that it turned on this consecutive-year test score proxy, which had not taken into account the baseline performance from which those pupils came into her class.
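The consecutive-year proxy can be reduced to a few lines of toy arithmetic: rank teachers by the average change in their pupils’ scores from one year to the next. The figures below are invented purely to show how an inflated baseline can sink a good teacher’s ranking:

```python
# Toy version of the consecutive-year proxy: rank teachers by the average
# change in their pupils' test scores from one year to the next. All the
# scores below are invented to show the effect of an inflated baseline.

def value_added(last_year_scores, this_year_scores):
    changes = [now - before
               for before, now in zip(last_year_scores, this_year_scores)]
    return sum(changes) / len(changes)

# Teacher A's pupils arrived with inflated prior-year scores, then scored
# realistically under a good teacher: the proxy makes A look terrible.
teacher_a = value_added([90, 88, 92], [75, 74, 78])

# Teacher B's pupils arrived with accurate prior-year scores and improved
# modestly: the proxy makes B look good.
teacher_b = value_added([60, 62, 58], [68, 70, 66])
```

On these invented numbers teacher A scores heavily negative and teacher B positive, even though A may be the better teacher: the metric measures the quality of the baseline as much as the quality of the teaching.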
It cannot be a good thing to have such decisions taken by an opaque process not open to scrutiny and criticism. Cathy O’Neil’s examples have been drawn from the USA, but Stephanie is pleased to note that since the Parliamentary Committee started looking at the effects of algorithms, more British examples have been emerging.
Summary:
- They are often totally opaque, which makes them unchallengeable. If we don’t know how they are made, how do we know if they are weighted correctly? How do we know if they are fair?
- Frequently, the decisions produced by algorithms are not understood by the people who deliver them. This may be because a ‘machine learning’ system was involved, such that the intermediate steps between input and output are undiscoverable. Or it may be that the service was bought in from a third party. This is what banks do with credit scores – they can tell you Yes or No, and they can tell you what your credit score is, but they can’t explain how it was arrived at, or whether the data input was correct.
- There are things that just can’t be measured with numbers. Consider again that example of teacher rankings: an algorithm can’t weigh how a teacher deals with the difficult issues that pupils bring from their home lives; it sees only the test results.
- Systems sometimes cannot learn when they are wrong, if there is no mechanism for feedback and course correction.
- Blind faith in technology can lead to the humans who implement those algorithmically-made decisions failing to take responsibility.
- The perception that algorithms are unbiased can be unfounded – as Tamara had already explained. When it comes to ‘training’ the system, which data do you include, which do you exclude, and is the data set appropriate? If it was originally collected for another purpose, it may not fit the current one.
- ‘Success’ can be claimed even when people are being harmed. In the public sector, managers may feel that problems have been ‘fixed’ when teachers are fired. If the objective is to make or save money, then firing teachers, redeploying resources elsewhere or turning a profit can make it seem that the model is working: meeting the objective defined at the start makes the model appear to justify itself. And if we can’t scrutinise or challenge it, agree or disagree, we are stuck in that loop.
- Bias can exist within the data itself. A good example is university admissions, where historical and outdated social norms which we don’t want to see persist, still lurk there. Using historical admissions data as a training data set can entrench bias.
- Then there is the principle of ‘fairness’. Algorithms consider a slew of statistics, and come out with a probability that someone might be a risky hire, or a bad borrower, or a bad teacher. But is it fair to treat people on the basis of a probability? We have been pooling risk for decades when it comes to insurance cover – as a society we seem happy with that, though we might get annoyed when the premium is decided because of our age rather than our skill in driving. But when sending people to prison, are we happy to tolerate the same level of uncertainty within data? And is past behaviour really a good predictor of future behaviour? Would we as individuals be happy to be treated on the basis of profiling statistics?
- Because algorithms are opaque, there is a lot of scope for ‘hokum’. Businesses are employing algorithms; government and its agencies are buying their services; but if we don’t understand how the decisions are made, there is scope for agencies to be sold these services by snake-oil salesmen.
What next?
In the first place, we need to know where algorithms are being used to support decision-making, so we know how to challenge the decision.
When the SciTech committee published its report at the end of May, Stephanie was delighted that they took her suggestion to ask government to publish a list of all public-sector uses of algorithms, and where that use is being planned, where they will affect significant decisions. The Committee also wants government to identify a minister to provide government-wide oversight of such algorithms, where they are being used by the public sector, to co-ordinate departments’ approaches to the development and deployment of algorithms, and such partnerships with the private sector. They also recommended ‘transparency by default’, where algorithms affect the public.
Secondly, we need to ask for the evidence. If we don’t know how these decisions are being made, we don’t know how to challenge them. Whether teacher performance is being ranked, criminals sentenced or services cut, we need to know how those decisions are being made. Organisations should apply standards to their own use of algorithms, and government should be setting the right example. If decision-support algorithms are being used in the public sector, it is vital that people are treated fairly, that someone can be held accountable, that decisions are transparent, and that hidden prejudice is avoided.
The public sector, because it holds significant datasets, actually holds a lot of power that it doesn’t seem to appreciate. In a couple of cases recently, it’s given data away without demanding transparency in return. A notorious example was the 2016 deal between the Royal Free Hospital and Google DeepMind, to develop algorithms to predict kidney failure, which led to the inappropriate transfer of personal sensitive data.
In the Budget of November 2017, the government announced a new Centre for Data Ethics and Innovation, but it hasn’t really talked about its remit yet. It is consulting on this until September 2018, so maybe by the end of the year we will know something. The SciTech Committee report had lots of strong recommendations for what its remit should be, including evaluation of accountability tools, and examining biases.
The Royal Statistical Society also has a council on data ethics, and the Nuffield Foundation set up a new commission, now the Convention on Data Ethics. Stephanie’s concern is that we now have several different bodies paying attention, but they should all set out their remits to avoid the duplication of work, so we know whose reports to read, and whose recommendations to follow. There needs to be some joined-up thinking, but currently it seems none are listening to each other.
Who might create a clear standard framework for data ethics? Chi Onwurah, the Labour Shadow Minister for Business, Energy and Industrial Strategy, recently said that the role of government is not to regulate every detail, but to set out a vision for the type of society we want, and the principles underlying that. She has also said that we need to debate those principles; once they are clarified, it makes it easier (but not necessarily easy) to have discussions about the standards we need, and how to define them and meet them practically.
Stephanie looks forward to seeing the Government’s response to the Science and Technology Committee’s report – a response which is required by law.
A suggested Code of Conduct came out in late 2016, with five principles for algorithms and their use. They are: Responsibility – someone in authority to deal with anything that goes wrong, and in a timely fashion; Explainability – the new GDPR includes a clause giving a right to an explanation of decisions made about you by algorithms (although this is now law, much will depend on how it is interpreted in the courts); and the remaining three, Accuracy, Auditability and Fairness.
So basically, we need to ask questions about the protection of people, and there have to be these points of challenge. Organisations need to ensure mechanisms of recourse if anything does go wrong, and they should also consider liability. At a recent speaking engagement on this topic, Stephanie addressed a roomful of lawyers; she told them they should not see this as a way to shirk liability, but should think ahead about what will happen.
This conversation is at the moment being driven by the autonomous car industry, who are worried about insurance and insurability. When something goes wrong with an algorithm, whose fault might it be? Is it the person who asked for it to be created, and deployed it? The person who designed it? Might something have gone wrong in the Cloud that day, such that a perfectly good algorithm just didn’t work as it was supposed to? ‘People need to get to grips with these liability issues now, otherwise it will be too late, and some individual or group of individuals will get screwed over,’ said Stephanie, ‘while companies try to say that it wasn’t their fault.’
Regulation might not turn out to be the answer. If you do regulate, what do you regulate? The algorithms themselves, similar to the manner in which medicines are scrutinised by the medicines regulator? Or the use of the algorithms? Or the outcomes? Or something else entirely?
Companies like Google, Facebook, Amazon and Microsoft – have they lost the ability to regulate themselves? How are companies regulating themselves? Should companies regulate themselves? Stephanie doesn’t think we can rely on that. Those are some of the questions she put to the audience.
Tamara took back the baton. She noted, we interact extensively with AI through many aspects of our lives. Many jobs that have been thought of as a human preserve, thinking jobs, may become more automated, handled by a computer or neural network. Jobs as we know them now may not be the jobs of the future. Does that mean unemployment, or just a change in the nature of work? It’s likely that in future we will be working side by side with AI on a regular basis. Already, decisions about bank loans, insurance, parole, employment increasingly rely on AI.
As humans, we are used to interacting with each other. How will we interact with non-humans? Specifically, with AI entities? Tamara referenced the famous ‘ELIZA’ experiment conducted 1964–68 by Joseph Weizenbaum, in which a computer program was written to simulate a practitioner of person-centred psychotherapy, communicating with a user via text dialogue. In response to text typed in by the user, the ELIZA program responded with a question, as if trying sympathetically to elicit further explanation or information from the user. This illustrates how we tend to project human qualities onto these non-human systems. (A wealth of other examples is given in Sherry Turkle’s 1984 book, ‘The Second Self’.)
However, sometimes machine/human interactions don’t happen so smoothly. Robotics professor Masahiro Mori studied this in the 1970s, examining people’s reactions to robots made to appear human. Many people responded to such robots with greater warmth as they were made to appear more human, but at a certain point along that transition there was an experience of unease and revulsion which he dubbed the ‘Uncanny Valley’. This is the point when something jarring about the appearance, behaviour or mode of conversation with the artificial human makes you feel uncomfortable and shatters the illusion.
‘Uncanny Valley’ research has been continued since Mori’s original work. It has significance for computer-generated on-screen avatars, and CGI characters in movies. A useful discussion of this phenomenon can be found in the Wikipedia article at https://en.wikipedia.org/wiki/Uncanny_valley
There is a Virtual Personal Assistant service for iOS devices, called ‘Fin’, which Tamara referenced (see https://www.fin.com). Combining an iOS app with a cloud-based computation service, ‘Fin’ avoids some of the risk of Uncanny Valley by interacting purely through voice command and on-screen text response. Is that how people might feel comfortable interacting with an AI? Or would people prefer something that attempts to represent a human presence?
Clare Parry remarked that she had been at an event about care robots, where you don’t get an Uncanny Valley effect because despite a broadly humanoid form, they are obviously robots. Clare also thought that although robots (including autonomous cars) might do bad things, they aren’t going to do the kind of bad things that humans do, and machines do some things better than people do. An autonomous car doesn’t get drunk or suffer from road-rage…
Tamara concluded by observing that our interactions with these systems shape how we behave. This is not a new thing – we have always been shaped by the systems and the tools that we create. The printing press moved us from an oral/social method of sharing stories, to a more individual experience, which arguably has made us more individualistic as a society. Perhaps our interactions with AI will shape us similarly, and we should stop and think about the implications for society. Will a partnership with AI bring out the best of our humanity, or make us more machine-like?
Tamara would prefer us not to think of Artificial Intelligence as a reified machine system, but of Intelligence Augmented, shifting the focus of discussion onto how these systems can help us flourish. And who are the people that need that help the most? Can we use these systems to deal with the big problems we face, such as poverty, climate change, disease and others? How can we integrate these computational assistances to help us make the best of what makes us human?
There was so much food for thought in the lectures that everyone was happy to talk together in the final discussion and the chat over refreshments that followed. We could campaign to say, ‘We’ve got to understand the algorithms, we’ve got to have them documented’, but perhaps there are certain kinds of AI practice (such as those involved in medical diagnosis from imaging input) where it is just not going to be possible.
From a blog by Conrad Taylor, June 2018
March 2019 Seminar: Open Data
Summary
At this meeting David Penfold gave an introduction to the applications and implications of Open Data and the related topic of Linked Data. As more and more data is generated by the day, and even by the minute, how that data is used and what information can be obtained from it becomes ever more significant. The meeting looked at these topics and reviewed how the use of Open and Linked Data can make access to information, and the use made of it, much more powerful.
The meeting mainly consisted of a general (fairly non-technical) introduction to the subject from David Penfold, who gave examples of how open data is used by organisations such as Network Rail. He showed excerpts from presentations from Sir Tim Berners-Lee and Sir Nigel Shadbolt and concluded with a consideration of the ethics of Open Data and the implications of AI.
Speaker
Dr David Penfold is vice-chairman of NetIKX and has worked for many years in publishing, with a particular emphasis on content, structured documents and information management within a publishing context. He has previously been Chair of the British Computer Society Electronic Publishing Specialist Group and a Senior Lecturer at the London College of Communication (Deputy Course Director of the MA in Publishing). He is currently Convenor of the terminology Working Group of the ISO Technical Committee on Graphic Technology and a founder member of the recently formed IK SpringBoard, which is working on methods of implementation of the revised CILIP/KPMG report on Information as an Asset.
Time and Venue
2pm on 20th March 2019, The British Dental Association, 64 Wimpole Street, London W1G 8YS
Pre Event Information
None
Slides
No slides available for this presentation
Tweets
#netikx97
Blog
A report has been posted on the NetIKX blog
Study Suggestions
Have a look at the website for the Open Data Institute: https://theodi.org/
Blog for January 2019: Wikipedia & knowledge sharing
In January 2019, NetIKX held a seminar on the topic ‘Wikipedia and other knowledge-sharing experiences’. Andy Mabbett gave a talk about one of the largest global projects in knowledge gathering in the public sphere: Wikipedia and its sister projects. Andy is an experienced editor of Wikipedia, with more than a million edits to his name. He worked in website management and always kept his eyes open for new developments on the Web. When he heard about the Wikipedia project, founded in 2001, he searched there for information about the local nature reserves – he is a keen bird-watcher. There was nothing to be found, and this inspired him to add his first few entries. He has been a volunteer since 2003 and makes a modest living with part of his income stream coming from training and helping others to become Wikipedia contributors too. The volunteers are expected to write publicly accessible material, not create new information. The sources can be as diverse and scattered as necessary, but Wikipedia pulls that information together coherently and gives links back to the sources.
The Wikimedia Foundation, which hosts Wikipedia, says: ‘imagine a world in which every single human being can freely share in the sum of all knowledge. That is our commitment.’
Wikipedia is the free encyclopaedia that anybody can edit. It is built by a community of volunteers contributing bit by bit over time. The content is freely licensed for anybody to re-use, under a ‘creative commons attribution share-alike’ licence. You can take Wikipedia content and use it on your own website, even in commercial publications and all you have to do in return is to say where you got it from. The copyright in the content remains the intellectual property of the people who have written it.
The Wikimedia Foundation is the organisation which hosts Wikipedia. They keep the servers and the software running. The Foundation does not manage the content. It occasionally gets involved over legal issues for example, child protection but otherwise they don’t set editorial policy or get involved in editorial conflicts. That is the domain of the community.
Guidelines and principles
Wikipedia operates according to a number of principles called the ‘five pillars’.
In Wikipedia, all contributors are theoretically equal and hold each other to account. There is no editorial board, and there are no senior editors who carry a right of overrule or veto. 'That doesn't quite work in theory,' says Andy, 'but like the flight of the bumblebee, it works in practice.' For example, in September 2018, newspapers ran a story that the Tate Gallery had decided to stop writing biographies of artists for its website; it would use copies of Wikipedia articles instead. The BBC does the same, with biographies of musicians and bands on its website, and also with articles about species of animals. The confidence of these institutions comes because it is recognised that Wikipedians are good at fact-checking, and that if errors are spotted, or assertions made without a supporting reliable reference, they get flagged up. But there are some unintended consequences too. Because dedicated Wikipedians have the habit of checking articles for errors and deficits, Wikipedia can be a very unfriendly place for new and inexperienced editors. A new article can get critical 'flags' to show something needs further attention. People can get quite zealous about fighting conflicts of interest, bias, or pseudo-science.
For most people there is just one Wikipedia. But there are nearly 300 Wikipedias in different languages. Several have over a million articles, some only a few thousand. Some are written in a language threatened with extinction and they constitute the only place where a community of people is creating a website in that language, to help preserve it as much as to preserve the knowledge.
Wikipedia also has a number of 'sister projects', including Wiktionary, Wikiversity, Wikidata and the Wikimedia Commons.
Probably the Wikidata project is the most important of the sister projects, in terms of the impact it is having and its rate of expansion. Many Wikipedia articles have an 'infobox' on the right side. These information boxes are machine-readable, as they have a microformat mark-up behind the scenes. From this came the idea of gathering all this information centrally. This makes it easier to share across different versions of Wikipedia, and it means all the Wikipedias can be updated together, for example if someone well known dies. Under its open licence, the data can be used by any other project in the world. Using Wikidata identifiers for millions of things can help your system become more interoperable with others. As a result, there is a huge asset of data, including data taken from other bodies (for example, English Heritage or chemistry databases).
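To give a flavour of what 'machine-readable' means in practice: Wikidata publishes its results in the standard SPARQL JSON format, which any program can parse. The sketch below works on a hand-made sample in that shape (the real data would come from the public endpoint at query.wikidata.org/sparql; the variable name `itemLabel` and the sample value are illustrative):

```python
import json

# A hand-made sample in the shape of a Wikidata SPARQL JSON response.
# Real responses from https://query.wikidata.org/sparql follow the same
# standard SPARQL 1.1 'results/bindings' structure.
sample_response = json.loads("""
{
  "results": {
    "bindings": [
      {"itemLabel": {"type": "literal", "value": "Douglas Adams"}}
    ]
  }
}
""")

def extract_values(response: dict, var: str) -> list:
    """Pull the values bound to one SPARQL variable out of a result set."""
    return [row[var]["value"] for row in response["results"]["bindings"]]

print(extract_values(sample_response, "itemLabel"))
```

Because the format is a published standard, the same few lines of parsing code work for any query against Wikidata, which is part of what makes the data so easy to re-use.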
Wikipedia has many more such projects, which Andy explained to us; the information was a revelation to most of us. We were then delighted to spend some time on an exercise in small groups. This featured two further speakers, who talked about the way they had used a shared content management system to gather and share knowledge. These extra speakers circulated round the groups to help the discussions. The format was different from NetIKX's usual breakout groups, but feedback from participants was very positive.
This blog is based on a report by Conrad Taylor.
To see the full report you can follow this link: Conradiator : NetIKX meeting report : Wikipedia & knowledge sharing
January 2019 Seminar: Making and sharing knowledge in communities: can wikis and related tools help?
Summary
At this meeting Andy Mabbett, a hugely experienced Wikipedia editor, gave an introduction to the background of Wikipedia and discussed many of the issues that it raises.
Accumulating, organising and sharing knowledge is never easy; this is the problem Knowledge Management sought to address. Today we hope networked electronic platforms can facilitate the process. They are never enough in themselves, because the issues are essentially human, to do with attitudes, social dynamics and work culture — but good tools certainly help.
In past seminars, NetIKX has looked at MS SharePoint, but that is proprietary and commercial, and it doesn't work for wider communities of practice and interest. In this seminar, we looked at a range of alternatives, some of them free of charge and/or open source, together with the social dynamics that make them succeed or fail.
First we looked at the wiki model. The case study was Wikipedia — famous, but poorly understood. Andy Mabbett presented this. Andy is a hugely experienced Wikipedia editor, who inspires respect and affection around the world for his ability to explain how Wikipedia works and for training novices in contributing content, including as a 'Wikipedian in Residence' encouraging scientific and cultural organisations to contribute their knowledge to Wikipedia.
A few stats: Wikipedia, the free online encyclopedia that anyone can in theory edit, has now survived 18 years, existing on donations and volunteering. It has accumulated over 40 million articles in 301 languages, and about 500 million visitors a month. The English edition has nearly 5.8 million articles. There are about 300,000 active contributors, of whom 4,000 make over a hundred edits annually.
Under the wider banner of 'Wikimedia', there are sister projects such as Wiktionary; Wikiversity, which hosts free learning materials; Wikidata, which is developing a large knowledge base; and the Wikimedia Commons, which holds copyright-free photos, audio and other multimedia resources.
And yet, as the Wikipedia article on Wikipedia admits, “Wikipedia has been criticized for exhibiting systemic bias, for presenting a mixture of ‘truths, half truths, and some falsehoods’, and for being subject to manipulation and spin in controversial topics.” This isn’t so surprising, because humans are involved. It’s a community that has had to struggle with issues of authority and quality control, partiality and sundry other pathologies. Andy provided insight into these problems, and explained how the Wikipedia community organises itself to define, defend and implement its values.
No NetIKX seminar would be complete without syndicate sessions, conducted in parallel table groups. For the second half of the afternoon, each group was presented in turn with tales from two further case studies of knowledge sharing using different platforms and operating under different rules. These endeavours might have used email lists, Google Docs, another kind of wiki software, or some other kind of groupware. There were tales of triumph, but of tribulation too.
At the end of the afternoon, pooling our thoughts helped to identify key factors that may point the way towards building better ways of sharing knowledge.
Speakers
Andy Mabbett has been a Wikipedia editor (as User:Pigsonthewing) since 2003 and involved with Wikidata since its inception in 2012. He has given presentations about Wikimedia projects on five continents, and has a great deal of experience working with organisations that wish to engage with Wikipedia and its sister projects. With a background in programming and managing websites for local government, Andy has been ‘Wikimedian in Residence’ at ORCID; TED; the Royal Society of Chemistry; The Physiological Society; the History of Modern Biomedicine Research Group; and various museums, galleries and archives. He is also the author of three books on the rock band Pink Floyd.
Our case-study witnesses
Sara Culpin is currently Head of Information & Knowledge at CRU International, where she has implemented a successful information and knowledge strategy on a shoestring budget. Since graduating from Loughborough University, she has spent over 25 years in information and knowledge roles at Aon, AT Kearney, PwC, and Deloitte. She is passionate about getting colleagues to share their knowledge across their organisations, while ensuring that their senior managers see the business value. https://www.linkedin.com/in/sara-culpin-2a1b051
Dr Richard Millwood has a background in school maths education, with a history of applying computers to education, and is Director of Core Education UK. As a researcher in the School of Computer Science & Statistics, Trinity College Dublin, he is developing a community of practice for computer science teachers in Ireland and creating workshops for families to develop creative use of computers together. In the 1990s Richard worked with Professor Stephen Heppell to create Ultralab, the learning technology research centre at Anglia Polytechnic University, acting as head 2005–2007. He researched innovation in online higher education in the Institute for Educational Cybernetics at the University of Bolton until 2013, gaining a PhD by Practice in ‘The Design of Learner-centred, Technology-enhanced Education’.
Time and Venue
2pm on 24th January 2019, The British Dental Association, 64 Wimpole Street, London W1G 8YS
Pre Event Information
None
Slides
No slides available for this presentation
Tweets
#netikx96
Blog
See our blog report: Wikipedia & knowledge sharing
Study Suggestions
Andy Mabbett, experienced Wikipedian
Andy Mabbett is an experienced editor of Wikipedia with more than a million edits to his name. Here’s a link to a ‘Wikijabber’ audio interview with Andy by Sebastian Wallroth (Sept 2017)
https://wikijabber.com/wikijabber-0005-with-pigsonthewing/
Blog for the November 2018 seminar: Networks
The rise of on-line social network platforms such as Facebook has made the general population more network-aware. Yet, at the same time, this obscures the many other ways in which network concepts and analysis can be of use. Network Science was billed as the topic for the November 2018 NetIKX seminar, and in hopes that we would explore the topic widely, I did some preliminary reading.
I find that Network Science is perhaps not so much a discipline in its own right, as an approach with application in many fields – analysis of natural and engineered geography, transport and communication, trade and manufacture, even dynamic systems in chemistry and biology. In essence, the approach models ‘distinct elements or actors represented by nodes (or vertices) and the connections between [them] as links (or edges)’ (Wikipedia), and has strong links to a branch of mathematics called Graph Theory, building on work by Euler in the 18th century.
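The nodes-and-edges model quoted above is simple enough to sketch in a few lines of code. The following toy example (plain Python, invented names) builds a tiny undirected graph from a list of links and computes each node's 'degree', i.e. how many connections touch it:

```python
from collections import defaultdict

# A tiny undirected graph: nodes are actors, edges (links) connect them.
edges = [("Ann", "Bob"), ("Bob", "Cat"), ("Cat", "Ann"), ("Cat", "Dan")]

# Build an adjacency structure: for each node, the set of its neighbours.
adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

# The 'degree' of a node is simply how many links touch it.
degree = {node: len(neigh) for node, neigh in adjacency.items()}
print(degree["Cat"])  # Cat has three connections: Ann, Bob and Dan
```

Graph Theory, mentioned above, is essentially the mathematics of structures like this `adjacency` table, however large they grow.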
In 2005, the US National Academy of Sciences was commissioned by the US Army to prepare a general report on the status of Network Science and its possible application to future war-fighting and security preparedness: the promise was, that if the approach looked valuable, the Army would put money into getting universities to study the field. The NAS report is available publicly at http://nap.edu/11516 and is worth a read. It groups the fields of application broadly into three: (a) geophysical and biological networks (e.g. river systems, food webs); (b) engineered networks (roads, electricity grid, the Internet); and (c) social networks and institutions.
I’ve prepared a one-page summary, ‘Network Science: some instances of networks and fields of complex dynamic interaction’, which also lists some further study resources, five books and an online movie. (Contact NetIKX if you want to see this). In that I also note: ‘We cannot consider the various types of network… to be independent of each other. Amazon relies on people ordering via the Internet, which relies on a telecomms network, and electronic financial transaction processing, all of which relies on the provision of electricity; their transport and delivery of goods relies on logistics services, therefore roads, marine cargo networks, ports, etc.’
The NetIKX seminar fell neatly into two halves. The first speaker, Professor Yasmin Merali of Hull University Business School, offered us a high-level theoretical view and the applications she laid emphasis on were those critical to business success and adaptation, and cybersecurity. Drew Mackie then provided a tighter focus on how social network research and ‘mapping’ can help to mobilise local community resources for social welfare provision.
Drew’s contribution was in some measure a reprise of the seminar he gave with David Wilcox in July 2016. Another NetIKX seminar which examined the related topics of graph databases and linked data graphs is that given by Dion Lindsay and Dave Clarke in January 2018.
Yasmin Merali noted that five years ago there wasn’t much talk about systems, but now it is commonplace for problems to be identified as ‘systemic’. Yet, ironically, Systems Thinking used to be very hot in the 1990s, later displaced by a fascination with computing technologies. Now once again we realise that we live in a very complex and increasingly unpredictable world of interactions at many levels; where the macro level has properties and behaviours that emerge from what happens at the micro level, without being consciously planned for or even anticipated. We need new analytical frameworks.
Our world is a Complex Adaptive System (CAS). It’s complex because of its many interconnected components, which influence and constrain and feed back upon each other. It is not deterministic like a machine, but more like a biological or ecological system. Complex Adaptive Systems are both stable (persistent) and malleable, with an ability to transform themselves in response to environmental pressures and stimuli – that is the ‘adaptive’ bit.
We have become highly attuned to the idea of networks through exposure to social media; the ideas of ‘gatekeepers’, popularity and influence in such a network are quite easy to understand. But this is selling short the potential of network analysis.
In successful, resilient systems, you will find a lot of diversity: many kinds of entity exist and interact within them. The links between entities in such systems are equally diverse. Links may persist, but they are not there for ever, nor is their nature static. This means the network can be ‘re-wired’, which makes adaptation easier.
Amazing non-linear effects can emerge from network organisation, and you can exploit this in two ways. If adverse phenomena are encountered, the network can implement a corrective feedback response very quickly (for example, to isolate part of the network, which is the correct public health response in the case of an epidemic). Or, if that reaction isn’t going to have the desired effect, we can try to re-wire the network, dampening some feedback loops, reinforcing others, and thus strengthening those ‘constellations’ of links which can best rise to the situation.
Information flows in the network. Yasmin offered us an analogy: the road network, and, distinct from that, the traffic running across it. People writing about the power of social media have been concentrating on the network structure (the nodes and the links), but not so much on the factors which enable or inhibit different kinds of dynamic within that structure.
Networks can enable efficient utilisation of distributed resources. We can also see networks as the locus where options are generated. Each change in a network brings about new conditions. But the generative capacity does come at a cost: you must allow sufficient diversity. Even if there are elements which don’t seem useful right now, there is a value in having redundant components: that’s how you get resilience.
You might extend network thinking outwards, beyond networking within one organisation, towards a number of organisations co-operating or competing with each other. Some of your potential partners can do better in the current system and with their resources than you; in another set of circumstances, it might be you who can do better. If we can co-operate, each tackling the risks we are best able to cope with, we can spread the overall risk and increase the capability pool.
Yasmin referred to the idea of ‘Six Degrees of Separation’ – that through intermediate connections, each of us is just six link-steps away from anybody else. The idea was important in the development of social network theory, but it turns out to have severe limitations, because where links are very tenuous, the degree of access or influence they imply can be illusory. That’s why simplistic social network graphs can be deceptive.
In a regular 'small world' network, everyone is connected to the same number of people in some organised way, and even one extra random link shortens the average path length. It's possible to 're-wire' a network to get more of these small-world effects, with the benefit of making very quick transitions possible.
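The effect of one extra random link can be checked directly. This sketch (pure Python; a 12-node ring stands in for the 'regular' network) measures the shortest path between two opposite nodes before and after a single 'shortcut' is added:

```python
from collections import deque

def shortest_path_len(adj, start, goal):
    """Breadth-first search: minimum number of hops from start to goal."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # unreachable

# A 12-node ring: each node is linked only to its two neighbours.
n = 12
ring = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
before = shortest_path_len(ring, 0, 6)   # half-way round: 6 hops

# Re-wire: add one 'shortcut' straight across the ring, then measure again.
ring[0].add(6)
ring[6].add(0)
after = shortest_path_len(ring, 0, 6)    # now just 1 hop

print(before, after)
```

One added link out of twelve collapses the distance between those nodes from six hops to one, which is the essence of the small-world effect.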
But there is another kind of network, similar in structure to the Internet and most of the biological systems we might consider – and that’s what we can call the ‘scale-free’ network. In this case, there is no cut-off limit to how large, or how well-connected a node can be.
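Scale-free networks are usually explained by 'preferential attachment': new nodes prefer to link to already well-connected nodes, so there is no typical size beyond which hubs stop growing. A minimal simulation of that growth rule (pure Python, a simplified version of the Barabási–Albert model in which each new node makes one link) shows the characteristic outcome, one or two huge hubs amid a mass of barely connected nodes:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def preferential_attachment(n_nodes):
    """Grow a network where each new node links to one existing node,
    chosen with probability proportional to that node's degree.
    Returns the final degree of every node."""
    targets = [0, 1]   # one entry per link-end; starts with one edge 0--1
    degree = [1, 1]
    for new in range(2, n_nodes):
        # Picking uniformly from 'targets' is equivalent to
        # degree-proportional choice, since busier nodes appear more often.
        old = random.choice(targets)
        targets += [new, old]
        degree.append(1)       # the new node has one link
        degree[old] += 1       # the chosen node gains one link
    return degree

deg = preferential_attachment(1000)
# The biggest hub far outgrows the typical (median) node: no cut-off scale.
print(max(deg), sorted(deg)[len(deg) // 2])
```

Even in this stripped-down version, the largest hub ends up an order of magnitude bigger than the typical node, whereas in the regular ring network above every node has exactly the same degree.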
Networks are also ‘lumpy’ – in large networks, there are very large hubs, but also adjacent less-prominent hubs, which in an Internet scenario are less likely to be attacked or degraded. This gives some hope that the system as a whole is less likely to be brought to its knees by a random attack; but a well-targeted attack against the larger hubs can indeed inflict a great deal of damage. This is something that concerns security-minded designers of networks for business. It is strategically imperative to have good intelligence about what is going on in a networked system – what are the entities, which of them are connected, and what is the nature of those connections and the information flows between them.
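The contrast between random failure and a targeted attack on a large hub can be seen even in a toy hub-and-spoke network (invented data): removing a random leaf barely matters, while removing the hub shatters the network.

```python
def reachable_from(adj, start):
    """Set of nodes reachable from start by following links."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def remove_node(adj, victim):
    """Return a copy of the network with one node (and its links) gone."""
    return {n: {m for m in neigh if m != victim}
            for n, neigh in adj.items() if n != victim}

# A hub ('H') connected to six leaves, A to F.
star = {"H": set("ABCDEF")}
for leaf in "ABCDEF":
    star[leaf] = {"H"}

after_leaf = remove_node(star, "A")  # random failure: lose one leaf
after_hub = remove_node(star, "H")   # targeted attack: lose the hub

print(len(reachable_from(after_leaf, "B")))  # still 6 nodes connected
print(len(reachable_from(after_hub, "B")))   # B is now isolated: just 1
```

Real networks sit between this extreme and the uniform ring, which is why random failures are usually survivable but well-targeted attacks on the largest hubs are not.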
It’s important to distinguish between resilience and robustness. Resilience often comes from having network resources in place which may be redundant, may appear to be superfluous or of marginal value, but they provide a broader option space and a better ability to adapt to changing circumstance.
Looking more specifically at social networks, Yasmin referred to the ‘birds of a feather flock together’ principle, where people are clustered and linked based on similar values, aspirations, interests, ways of thinking etc. Networks like this are often efficient and fast to react, and much networking in business operates along those lines. However, within such a network, you are unlikely to encounter new, possibly valuable alternative knowledge and ways of thinking.
Heterogeneous linkages may carry information along weaker ties, but they are valuable for expanding the knowledge pool. Expanded linkages may operate along the 'six degrees' principle, through intermediate friends-of-friends who serve both as transmitters and as filters. And yet a trend has been observed for social network engines (such as Facebook) to create a superdominance of 'birds of a feather' types of linkages, leading to confirmation bias and even polarisation.
In traditional ‘embodied’ social networks, people bonded and transacted with others whom they knew in relatively persistent ways, and could assess through an extended series of interactions in a broadly understandable context. In the modern cybersocial network, this is more difficult to re-create, because interactions occur through ‘shallow’ forms such as text and image – information is the main currency – and often between people who do not really know each other.
Another problem is the increased speed of information transfer, and decreased threshold of time for critical thought. Decent journalism has been one of the casualties. Yes, ‘citizen journalism’ via tweet or online video post can provide useful information – such informants can often go where the traditional correspondent could not – but verification becomes problematic, as does getting the broader picture, when competition between news channels to be first with the breaking story ‘trumps’ accuracy and broader context.
If we think of cybersocial networks as information networks, carrying information and meaning, things become interesting. Complexity comes not just from the arrangement of links and nodes, but also from the multiple versions of information, and whether a ‘message’ means the same to each person who receives it: there may be multiple frameworks of representation and understanding standing between you and the origin of the information.
This has ethical implications. Some people say that the Internet has pushed us into a new space. Yasmin argues that many of the issues are those we had before, only now more intensely. If we think about the ‘gig economy’, where labour value is extracted but workers have scant rights – or if we think about the ownership of data and the rights to use it, or surveillance culture – these issues have always been around. True, those problems are now being magnified, but maybe that cloud has a silver lining in forcing legislators to start thinking about how to control matters. Or is it the case that the new technologies of interaction have embedded themselves at such a fundamental level that we cannot shift them?
What worries Yasmin more are issues around Big Data. As we store increasingly large, increasingly granular data about people from sources such as fitbits, GPS trackers, Internet-of-Things devices, online searches… we may have more data, but are we better informed? Connectivity is said to be communication, but do we understand what is being said? The complexity of the data brings new challenges for ethics – often, you don’t know where it comes from, what was the quality of the instrumentation, and how to interpret the data sets.
And then there is artificial intelligence. The early dream was that AI would augment human capability, not displace it. In practice, it looks as if AI applications do have the potential to obliterate human agency. Historically, our frameworks for how to be in the world, how to understand it, were derived from our physical and social environment. Because our direct access to the physical world and the raw data derived from it is compromised, replaced by other people’s representation of other people’s possible worlds, we need to figure out whose ‘news’ we can trust.
When we act in response to the aggregated views of others, and messages filtered through the media, we can end up reinforcing those messages. Yasmin gave as an example rumours of the imminent collapse of a bank, causing a 'bank run' which actually does cause the bank's collapse (in the UK, an example was the September 2007 run on Northern Rock). She also recounted examples of the American broadcast media's spin on world events, such as the beginning of the war in Iraq, and 9/11. People chose to tune in to those media outlets whose view of the world they preferred. ('Oh honey, why do you watch those channels? It's so much nicer on Fox News.')
There is so much data available out there, that a media channel can easily find provable facts and package them together to support its own interpretation of the world. This process of ‘cementation’ of the silos makes dialogue between opposed camps increasingly difficult – a discontinuity of contemporaneous worlds. This raises questions about the way our contextual filtering is evolving in the era of the cybersocial. And if we lose our ‘contextual compass’, interpreting the world becomes more problematic.
In Artificial Intelligence, there are embedded rules. How does this affect human agency in making judgements? One may try to inject some serendipity into the process – but serendipity, said Yasmin, is not that serendipitous.
Yasmin left us with some questions. Who controls the network, and who controls the message? Should we be sitting back, or are there ethical considerations that mean we should be actively worrying about these things and doing what we can? What is it ethical not to have known, when things go wrong?
Drew Mackie prepares network maps for organisations; most of the examples he would give are in the London area. He declared he would not be talking about network theory, although much is implied, and underlies what he would address.
Mostly, Drew and his associates work with community groups. What they seek to ‘map’ are locally available resources, which may themselves be community groups, or agencies. In this context, one way to find out ‘where stuff is’ is to consult some kind of catalogue, such as those which local authorities prepare. And a location map will show you where stuff is. But when it comes to a network map, what we try to find out and depict is who collaborates with whom, across a whole range of agencies, community groups, and key individuals.
When an organisation commissions a network map from Drew, they generally have a clear idea of what they want to do with it. They may want to know the patterns of collaboration, what assets are shared, and who the key influencers are, because they want to use that information to influence policy, or to form projects or programmes in that area.
Drew explained that the kinds of network map he would be talking about are more than just visual representations that can be analysed according to various metrics. They are also a kind of database: they hold huge amounts of data in the nodes and connections, about how people collaborate, what assets they hold, etc. So really, what we create is a combination of a database and a network map, and as he would demonstrate, software can help us maintain both aspects.
If you want to build such a network map, it is essential to appoint a Map Manager to control it, update it, and also promote it. Unless you generate and maintain that awareness, in six months the map will be dead: people won't understand it, or why it was created.
Residents in the area may be the beneficiaries, but we don't expect them to interact with the map to any great extent. The main users will be one step up. To collect the information that goes into building the map, and to encourage people to support the project, you need people who act as community builders; Drew and his colleagues put quite a lot of effort into training such people.
To do this, they use two pieces of online software: sumApp, and Kumu. SumApp is the data collection program, into which you feed data from various sources, and it automatically builds you a network map through the agency of Kumu, the network visualisation and analytics tool. Data can be exported from either of these.
When people contribute their data to such a system, what they see online is the sumApp front end; they contribute data, then they get to see the generated network map. No-one has to do any drawing. SumApp can be left open as a permanent portal to the network map, so people can keep updating their data; and that’s important, because otherwise keeping a network map up to date is a nightmare (and probably won’t happen, if it’s left to an individual to do).
The information entered can be tagged with a date, and this allows a form of visualisation that shows how the network changes over time.
Drew then showed us how sumApp works, first demonstrating the management ‘dashboard’ through which we can monitor who are the participants, the number of emails sent, connections made and received, etc. So that we can experience that ourselves should we wish, Drew said he would see about inviting everyone present to join the demonstration map.
Data is gathered in through a survey form, which can be customised to the project’s purpose. To gather information about a participant’s connections, sumApp presents an array of ‘cards’, which you can scroll through or search, to identify those with whom you have a connection; and if you make a selection, a pop-up box enquires how frequently you interact with that person – in general, that correlates well with how closely you collaborate – and you can add a little story about why you connect. Generally that is in words, but sound and video clips can also be added.
Having got ‘data input’ out of the way, Drew showed us how the map can be explored. You can see a complete list of all the members of the map. If you were to view the whole map and all its connections, you would see an undecipherable mess; but by selecting a node member and choosing a command, you can for example fade back all but the immediate (first-degree) connections of one node (he chose our member Steve Dale as an example). Or, you could filter to see only those with a particular interest, or other attribute in common.
Drew also demonstrated that you can ask to see who else is connected to one person or institution via a second degree of connection – for example, those people connected to Steve via Conrad. This is a useful tool for organisations which are seeking to understand the whole mesh of organisations and other contacts round about them. Those who are keenest in using this are not policy people or managers, but people with one foot in the community, and the other foot in a management role. People such as children’s centre managers, or youth team leaders – people delivering a service locally, but who want to understand the broader ecology…
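The kind of first- and second-degree filtering Drew demonstrated is straightforward to express in code. This sketch uses a small fictional collaboration map (the names echo people mentioned in this report, but the links are invented, not the actual Kumu data):

```python
# A small fictional collaboration map: who connects with whom.
links = {
    "Steve": {"Conrad", "Drew"},
    "Conrad": {"Steve", "Yasmin"},
    "Drew": {"Steve", "Yasmin"},
    "Yasmin": {"Conrad", "Drew", "Tamara"},
    "Tamara": {"Yasmin"},
}

def first_degree(person):
    """Everyone directly connected to a person."""
    return set(links.get(person, ()))

def second_degree(person):
    """Everyone reachable via exactly one intermediary,
    excluding the person's own direct connections."""
    direct = first_degree(person)
    via = set().union(*(first_degree(p) for p in direct)) if direct else set()
    return via - direct - {person}

print(sorted(second_degree("Steve")))
```

In this toy map, Steve's second-degree circle contains only Yasmin, reached via either Conrad or Drew; tools like Kumu apply exactly this kind of filter, just across hundreds of nodes and with the results drawn rather than printed.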
Kumu is easy to use, and Drew and colleagues have held training sessions for people about the broad principles, only for those people to go home and, that night, draw their own Kumu map in a couple of hours – not untypically including about 80 different organisations.
Drew also demonstrated a network map created for the Centre for Ageing Better (CFAB). With the help of Ipsos MORI, they had produced six ‘personas’ which could represent different kinds of older people. One purpose of that project was to see how support services might be better co-ordinated to help people as they get older. Because Drew also talked through this in the July 2016 NetIKX meeting, I shall not cover it again here.
Drew also showed an example created in Graph Commons (https://graphcommons.com/). This network visualisation software has a nice feature that lets you get a rapid overview of a map in terms of its clusters, highlighting the person or organisation who is most central within that cluster, aggregating clusters for viewing purposes into a single higher-level node, and letting you explore the links between the clusters. The developers of sumApp are planning a forthcoming feature that will let sumApp work with Graph Commons as an alternative graph engine to Kumu.
In closing, Drew suggested that as a table-group exercise we should discuss ideas for how these insights, techniques and tools might be useful in our own work situations; note these on a sheet of flip-chart paper; and then we could later compare the outputs across tables.
Conrad Taylor
November 2018 Seminar: The Networkness of Networks
Summary
At this meeting Yasmin Merali, Professor of Systems Thinking and Director of the Centre for Systems Studies at Hull University Business School, and Drew Mackie gave an introduction to network science and demonstrated some practical applications.
Speakers
Yasmin Merali is Professor of Systems Thinking and Director of the Centre for Systems Studies at Hull University Business School. Prior to that she was Co-director of the Doctoral Training Centre for Complexity Science at the University of Warwick and served as Director of Warwick Business School’s Information Systems Research Unit until 2006. Professor Merali is an Expert Evaluator for the EU and was elected to the Executive Committee of the Council of the European Complex Systems Society in 2012 and the Board of the UNESCO Unitwin Complex Systems Digital Campus in 2013. Her research is trans-disciplinary, using complexity theory to address issues of transformation in internet-enabled socio-economic contexts, focusing on network dynamics and the emergence and co-evolution of socio-economic structures. She has extensive consultancy experience in public, private, and third sector organizations, and received a BT Fellowship and an IBM Faculty Award for her work on knowledge management and complexity.
Drew Mackie is a recognised expert in the Kumu online system of network visualisation and is particularly interested in using network methods to evaluate changes in connectivity over the life of projects.
Drew has been active in the Joined Up Digital project for the Centre for Ageing Better, following an exploration into Living Well in the Digital Age with the Age Action Alliance. He has also been involved in social network mapping for the Croydon Best Start programme.
Time and Venue
2pm on 15th November 2018, The British Dental Association, 64 Wimpole Street, London W1G 8YS
Pre Event Information
The internet and advances in information and communications are implicated in the emergence of the network economy and the network society. Greater connectivity and access to an increased variety and volume of information enable new and complex forms of organisation. This presents opportunities and threats that are challenging both public and private sector institutions.
This session looks at the quest for more effective ways of dealing with the uncertainties and dynamism of the network economy whilst maximising the opportunities afforded by the Internet and associated technologies. The main speaker will be Professor Yasmin Merali, who will explore how understanding the “networkness” of networks may enable us to understand the emerging context and to harness network forms of organisation to deliver transformational capacity or stability as appropriate in the face of environmental turbulence.
The afternoon will then feature practical discussion, in which those present can share examples from their own experience. This will be facilitated by Drew Mackie, who has a wide range of practical experience in this field.
This seminar will be our ‘Community Network’ meeting to which we welcome practitioners from our colleagues in other IKM networks as our guests.
Slides
No slides available
Tweets
#netikx95 There were no tweets from this meeting due to a power cut.
Blog
See our blog report: Networks
Study Suggestions
Have a look at the Centre for Systems Studies at Hull University: https://www.google.com/search?client=firefox-b-d&q=Centre+for+Systems+Studies+%7C+University+of+Hull
Taxonomy Bootcamp
Taxonomy Bootcamp is happening again on 16–17 October.
As a partnership organisation, NetIKX is able to offer members a 25% discount, the code for which has been sent to all members. If you are a member and have not received this, please email info[at]netikx.org.uk.
For more details and to register, go to:
http://www.taxonomybootcamp.com/London/2018/default.aspx
Blog for the September 2018 Seminar: Ontology is cool!
Our first speaker, Helen Lippell, is a freelance taxonomist and an organiser of the annual Taxonomy Boot Camp in London. She also works with organisations on constructing thesauri, ontologies and linked data repositories. As far as she is concerned, the point of ontology construction is to model the world to help meet business objectives, and that’s the practical angle from which she approached the topic. Taxonomies and ontologies are strongly related: taxonomies are concerned with the relationships between the terms used in a domain, while ontologies focus more on describing the things within the domain and the relationships between them. Neither is inherently better: you choose what is appropriate for your business need. An ontology offers greater capabilities and a gateway to machine reasoning, but if you don’t need those, the extra effort will not be worth it. A taxonomy can provide the controlled vocabularies which help with navigation and search.
Using fascinating examples, Helen listed a number of business scenarios in which ontologies can be helpful: information retrieval, classification, tagging and data manipulation. She is currently doing a lot of work on an ontology that will help in content aggregation and filtering, automating many processes that are currently manual.
Implementing an ontology project is not trivial. It starts with a process of thoroughly understanding and modelling everything connected to the particular domain in which the project and business operate. Information professionals are well suited to link between the people with technical skills and others who know the business better and can advocate for the end-users of these systems.
Finally, Helen discussed the software that can facilitate this work, both free and commercial. Her talk was followed by an exercise where we produced our own model, with plenty of help and advice from the speakers. We looked at problems we could help solve in London, such as guiding visitors around the city or drawing up a five-year ecology plan. It was fun, although we were not quite up to achieving a high-quality product ready to change the world!
In the second part of the meeting, we heard from Silver Oliver, an information architect. Again, there was a short talk and then a practical exercise. We learnt that domain modelling is fundamental to compiling successful taxonomies, controlled vocabularies and classification schemes, as well as formal ontologies. When you set out to model a domain, it is beneficial to engage as many voices and perspectives as possible, and to do so before you start exploring tools and implementations, so that nobody is excluded from contributing their own views. The exercise that followed looked at creating a website focusing on food and recipes, which was a pleasant topic to work on in our small groups.
The seminar finished with a set of recommendations.
That led to a break with refreshments and general conversations based on our experiences during the afternoon.
Extract from a report by Conrad Taylor.
If you want to read the full account of this seminar – follow this link:
https://www.conradiator.com/kidmm/netikx-ontology-domains-sept2018.html
September 2018 Seminar: Ontologies and domain modelling: a fun (honest!) and friendly introduction
Summary
At this lively meeting Helen Lippell and Silver Oliver introduced ontologies and explained how they could be used. Michael Smethurst and Anya Somerville ran an interactive practical session.
Speakers
Helen Lippell has run her own consultancy since 2007, working as a specialist in taxonomy, metadata, ontologies and enterprise search. She loves getting stuck into projects and working with clients to figure out how best to use the messy content and data they have. She has supported organisations such as the BBC, gov.uk, Financial Times, Pearson, and Electronic Arts.
Silver Oliver has worked as an Information Architect for many years, previously with the BBC, the British Library and government. For the last 10 years he has worked at Data Language, a small consultancy specialising in semantics. His expertise covers all areas of information architecture, with a primary focus on the role of domain modelling in delivering design solutions.
Michael Smethurst has worked as an Information Architect for over ten years. Prior to working for the UK Parliament, he worked at the BBC and BBC R&D on a variety of projects, ranging across programmes, iPlayer, news, sport and food. There he brought together practices from the semantic web and the domain-driven design community. He now works as a data architect for the UK Parliament, using the same methods to understand and document parliamentary processes, workflows and data flows.
Anya Somerville is Head of Indexing and Data Management for the House of Commons Library, where she leads a team of information specialists. The team adds subject indexing, links and other metadata to parliamentary business data. It also manages Parliament’s controlled vocabulary. Anya and her team work closely with Michael and Silver on the domain models for parliamentary business.
A pdf flyer for this meeting can be downloaded from the link Ontologies and domain modelling
Time and Venue
2pm on 20th September 2018, The British Dental Association, 64 Wimpole Street, London W1G 8YS
Pre Event Information
What exactly is an ontology? How can we use them to better understand our information environments? Helen Lippell and Silver Oliver will be explaining all, providing examples from projects they have worked on, and giving you the chance to build your own ontology and domain model. Helen will give an accessible introduction to what ontologies are, how they are being used in a variety of different applications, how they differ from taxonomies, and how you can combine taxonomies and ontologies in models. This introduction assumes no prior knowledge of ontologies or semantic technologies.
Silver will be explaining how ontologies are used in domain modelling, demystifying some of the terminology, and providing case studies to demonstrate ontologies in practice. There will be the chance to get pens and paper out to produce and develop your own ontology and domain model, with additional help from experienced domain modellers Michael and Anya. You will learn the basic ideas around ontologies and domain modelling and see how ontologies can be used to better understand our information environments. You will begin to learn how to develop and use ontologies.
Slides
Slides available.
Tweets
#netikx94
Blog
See our blog report: Ontologies and Domain Modelling
Study Suggestions
Take a look at the Simple Knowledge Organization System Namespace Document
https://www.w3.org/2009/08/skos-reference/skos.html
Blog for the July 2018 seminar: Machines and Morality: Can AI be Ethical?
In discussions of AI, one issue that is often raised is that of the ‘black box’ problem, where we cannot know how a machine system comes to its decisions and recommendations. That is particularly true of the class of self-training ‘deep machine learning’ systems which have been making the headlines in recent medical research.
Dr Tamara Ansons has a background in Cognitive Psychology and works for Ipsos MORI, applying academic research, principally from psychology, to various client-serving projects. In her PhD work, she looked at memory and how it influences decision-making; in the course of that, she investigated neural networks, as a form of representation for how memory stores and uses information.
At our NetIKX seminar for July 2018, she observed that ‘Artificial Intelligence’ is being used across a range of purposes that affect our lives, from mundane to highly significant. Recently, she thinks, the technology has been developing so fast that we have not been stepping back enough to think about the implications properly.
Tamara displayed an amusing image, an array of small photos of round light-brown objects, each one marked with three dark patches. Some were photos of chihuahua puppies, and the others were muffins with three raisins on top! People can easily distinguish between a dog and a muffin, a raisin and an eye or doggy nose. But for a computing system, such tasks are fairly difficult. Given the discrepancy in capability, how confident should we feel about handing over decisions with moral consequences to these machines?
Tamara stated that the ideas behind neural networks have emerged from cognitive psychology, from a belief that how we learn and understand information is through a network of interconnected concepts. She illustrated this with diagrams in which one concept, ‘dog’, was connected to others such as ‘tail’, ‘has fur’, ‘barks’ [but note, there are dogs without fur and dogs that don’t bark]. From a ‘connectionist’ view, our understanding of what a dog is, is based around these features of identity, and how they are represented in our cognitive system. In cognitive psychology, there is a debate between this view and a ‘symbolist’ interpretation, which says that we don’t necessarily abstract from finer feature details, but process information more as a whole.
This connectionist model of mental activity, said Tamara, can be useful in approaching some specialist tasks. Suppose you are developing skill at a task that presents itself to you frequently – putting a tyre on a wheel, gutting fish, sewing a hem, planing wood. We can think of the cognitive system as having component elements that, with practice and through reinforcement, become more strongly associated with each other, such that one becomes better at doing that task.
Humans tend to have fairly good task-specific abilities. We learn new tasks well, and our performance improves with practice. But does this encapsulate what it means to be intelligent? Human intelligence is not characterised only by the ability to do certain tasks well. Tamara argued that what makes humans unique is our adaptability, the ability to take learnings from one context and apply them imaginatively to another. And humans don’t have to learn something over many, many trials: we can learn from a single significant event.
An algorithm is a set of rules which specify how certain bits of information are combined in a stepwise process. As an example, Tamara suggested a recipe for baking a cake.
Many algorithms can be represented with a kind of node-link diagram that on one side specifies the inputs, and on the other side the outputs, with intermediate steps between to move from input to output. The output is a weighted aggregate of the information that went into the algorithm.
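As a hypothetical illustration of this idea (not an example Tamara presented), such a node-link algorithm can be sketched in a few lines of Python, with each node computing a weighted aggregate of its inputs and an intermediate step sitting between input and output:

```python
def neuron(inputs, weights, bias):
    """One node: a weighted aggregate of its inputs, passed through a threshold."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

def tiny_network(x1, x2):
    """Two input nodes, one intermediate node, one output node.
    The hand-picked weights make the output behave like exclusive-or."""
    h = neuron([x1, x2], [1.0, 1.0], -1.5)               # intermediate step
    return neuron([x1, x2, h], [1.0, 1.0, -2.0], -0.5)   # weighted aggregate output
```

Here the weights are set by hand purely for illustration; the ‘learning’ described in the next paragraph is the process of finding such weights automatically.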
When we talk about ‘learning’ in the context of such a system – ‘machine learning’ is a common phrase – a feedback or evaluation loop assesses how successful the algorithms are at matching input to acceptable decision; and the system must be able to modify its algorithms to achieve better matches.
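A minimal sketch of such a feedback loop, assuming a perceptron-style update rule (an illustrative choice of algorithm, not one named in the talk): the system compares its output with the acceptable answer and nudges its weights towards a better match.

```python
def train(examples, epochs=20, lr=0.1):
    """Repeatedly evaluate outputs against targets and adjust the weights."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            # current output: weighted aggregate, thresholded
            total = sum(i * w for i, w in zip(inputs, weights)) + bias
            output = 1.0 if total > 0 else 0.0
            error = target - output               # evaluation step
            # modification step: shift each weight towards a better match
            weights = [w + lr * error * i for w, i in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn a simple AND-like decision from labelled training examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
```

After training, the learned weights reproduce the labels in the training data – which also illustrates Tamara’s next point, that the system can only be as good as the examples it is fed.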
Tamara suggested that at a basic level, we must recognise that humans are the ones feeding training data to the neural network system – texts, images, audio and so on. The implication is that the accuracy of machine learning is only as good as the data you give it. If all the ‘dog’ pictures we give it are of Jack Russell terriers, it is going to struggle to identify a Labrador as a dog. We should also think about the people who develop these systems – they are hardly a model of diversity, with women and ethnic minorities under-represented. The cognitive biases of the developer community can influence how machine learning systems are trained, what classifications they are asked to apply, and therefore how they work.
If the system is doing something fairly trivial, such as guessing what word you meant to type when you make a keyboarding mistake, there isn’t much to worry about. But what if the system is deciding whether and on what terms to give us insurance, or a bank loan or mortgage? It is critically important that we know how these systems have been developed, and by whom, to ensure that there are no unfair biases at work.
Tamara said that an ‘AI’ system develops its understanding of the world from the explicit input with which it is fed. She suggested that in contrast, humans make decisions, and act, on the basis of myriad influences of which we are not always aware, and often can’t formulate or quantify. Therefore it is unrealistic, she suggests, to expect an AI to achieve a human subtlety and balance in its decision-making.
However, there have been some very promising results using AI in certain decision-making contexts, for example, in detecting certain kinds of disease. In some of these applications, it can be argued that the AI system can sidestep the biases, especially the attentional biases, of humans. But there are also cases where companies have allowed algorithms to act in highly inappropriate and insensitive ways towards individuals.
But perhaps the really big issue is that we really don’t understand what is happening inside these networks – certainly, the really ‘deep learning’ networks where the hidden inner layers shift towards a degree of inner complexity which it is beyond our powers to comprehend. This is an aspect which Stephanie would address.
Stephanie Mathisen is the policy manager at ‘Sense About Science’, a small independent campaigning charity based in London. SAS was set up in 2002, when the media were struggling to cope with science-based topics such as genetic modification in farming and the alleged link between the MMR vaccine and autism.
SAS works with researchers to help them to communicate better with the public, and has published a number of accessible topic guides, such as ‘Making Sense of Nuclear’, ‘Making Sense of Allergies’ and other titles on forensic genetics, chemical stories in the press, radiation, drug safety etc. They also run a campaign called ‘Ask For Evidence’, equipping people to ask questions about ‘scientific’ claims, perhaps by a politician asking for your vote, or a company for your custom.
But Stephanie’s main focus is around their Evidence In Policy work, examining the role of scientific evidence in government policy formation. A recent SAS report surveyed how transparent twelve government departments are about their use of evidence. The focus is not about the quality of evidence, nor the appropriateness of policies, just on being clear what evidence was taken into account in making those decisions, and how. In talking about the use of Artificial Intelligence in decision support, ‘meaningful transparency’ would be the main concern she would raise.
Sense About Science’s work on algorithms started a couple of years ago, following a lecture by Cory Doctorow, the author of the blog Boing Boing, which raised the question of ‘black box’ decision making in people’s lives. Around the same time, similar concerns were being raised by the independent investigative newsroom ‘ProPublica’, and in Cathy O’Neil’s book ‘Weapons of Math Destruction’. The director of Sense About Science urged Stephanie to read that book, and she heartily recommends it.
There are many parliamentary committees which scrutinise the work of government. The House of Commons Science and Technology Committee has an unusually broad remit. They put out an open call to the public, asking for suggestions for enquiry topics, and Stephanie wrote to suggest the role of algorithms in decision-making. Together with seven or eight others, Stephanie was invited to come and give a presentation, and she persuaded the Committee to launch an enquiry on the issue.
The SciTech Committee’s work was disrupted by the 2016 snap general election, but they pursued the topic, and reported in May 2018. (See https://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/news-parliament-2017/algorithms-in-decision-making-report-published-17-19-/)
Stephanie then treated us to a version of the ‘pitch’ which she gave to the Committee.
An algorithm is really no more than a set of steps carried out sequentially to give a desired outcome. A cooking recipe and directions for how to get to a place are everyday examples. Algorithms are everywhere, many implemented by machines, whether controlling the operation of a cash machine or placing your phone call. Algorithms are also behind the analysis of huge amounts of data, carrying out tasks that would be beyond the capacity of humans, efficiently and cheaply, and bringing a great deal of benefit to us. They are generally considered to be objective and impartial.
But in reality, there are troubling issues with algorithms. Quite rapidly, and without debate, they have been engaged to make important decisions about our lives. Such a decision would in the past have been made by a human, and though that person might be following a formulaic procedure, at least you can ask a person to explain what they are doing. What is different about computer algorithms is their potential complexity and ability to be applied at scale; which means, if there are biases ingrained in the algorithm, or in the data selected for them to process, those shortcomings will also be applied at scale, blindly, and inscrutably.
In China, the government is developing a comprehensive ‘social credit’ system – in truth, a kind of state-run reputation ranking system – where citizens will get merits or demerits for various behaviours. Living in a modestly-sized apartment might add points to your score; paying bills late or posting negative comments online would be penalised. Your score would then determine what resources you have access to. For example, anyone defaulting on a court-ordered fine will not be allowed to buy first-class rail tickets, travel by air, or take a package holiday. The scheme is already being piloted, and is supposed to be fully rolled out as early as 2020.
(See Wikipedia article at https://en.wikipedia.org/wiki/Social_Credit_System and Wired article at https://www.wired.co.uk/article/china-social-credit.)
Stephanie suggested a closer look at the use of algorithms to rank teacher performance. Surely it is better to do so using an unbiased algorithm? This is what happened in the Washington school district in the USA – an example described in some depth in Cathy O’Neil’s book. At the end of the 2009–2010 school year, all teachers were ranked, largely on the basis of a comparison of their pupils’ test scores between one year and the next. On the basis of this assessment, 2% of teachers were summarily dismissed and a further 5% lost their jobs the following year. But what if the algorithms were misconceived, and the teachers thus victimised were not bad teachers?
In this particular case, one of the fired teachers was rated very highly by her pupils and their parents. There was no way that she could work out the basis of the decision; later it emerged that it turned on this consecutive-year test score proxy, which had not taken into account the baseline performance from which those pupils came into her class.
It cannot be a good thing to have such decisions taken by an opaque process not open to scrutiny and criticism. Cathy O’Neil’s examples have been drawn from the USA, but Stephanie is pleased to note that since the Parliamentary Committee started looking at the effects of algorithms, more British examples have been emerging.
Summary:
What next?
In the first place, we need to know where algorithms are being used to support decision-making, so we know how to challenge the decision.
When the SciTech committee published its report at the end of May, Stephanie was delighted that they took her suggestion to ask government to publish a list of all public-sector uses of algorithms, and where that use is being planned, where they will affect significant decisions. The Committee also wants government to identify a minister to provide government-wide oversight of such algorithms, where they are being used by the public sector, to co-ordinate departments’ approaches to the development and deployment of algorithms, and such partnerships with the private sector. They also recommended ‘transparency by default’, where algorithms affect the public.
Secondly, we need to ask for the evidence. If we don’t know how these decisions are being made, we don’t know how to challenge them. Whether teacher performance is being ranked, criminals sentenced or services cut, we need to know how those decisions are being made. Organisations should apply standards to their own use of algorithms, and government should be setting the right example. If decision-support algorithms are being used in the public sector, it is so important that people are treated fairly, that someone can be held accountable, and that decisions are transparent, and that hidden prejudice is avoided.
The public sector, because it holds significant datasets, actually holds a lot of power that it doesn’t seem to appreciate. In a couple of cases recently, it’s given data away without demanding transparency in return. A notorious example was the 2016 deal between the Royal Free Hospital and Google DeepMind, to develop algorithms to predict kidney failure, which led to the inappropriate transfer of personal sensitive data.
In the Budget of November 2017, the government announced a new Centre for Data Ethics and Innovation, but it hasn’t really talked about its remit yet. It is consulting on this until September 2018, so maybe by the end of the year we will know something. The SciTech Committee report had lots of strong recommendations for what its remit should be, including evaluation of accountability tools, and examining biases.
The Royal Statistical Society also has a council on data ethics, and the Nuffield Foundation set up a new commission, now the Convention on Data Ethics. Stephanie’s concern is that we now have several different bodies paying attention, but they should all set out their remits to avoid the duplication of work, so we know whose reports to read, and whose recommendations to follow. There needs to be some joined-up thinking, but currently it seems none are listening to each other.
Who might create a clear standard framework for data ethics? Chi Onwurah, the Labour Shadow Minister for Business, Energy and Industrial Strategy, recently said that the role of government is not to regulate every detail, but to set out a vision for the type of society we want, and the principles underlying that. She has also said that we need to debate those principles; once they are clarified, it makes it easier (but not necessarily easy) to have discussions about the standards we need, and how to define them and meet them practically.
Stephanie looks forward to seeing the Government’s response to the Science and Technology Committee’s report – a response which is required by law.
A suggested Code of Conduct came out in late 2016, with five principles for algorithms and their use: Responsibility – someone in authority to deal with anything that goes wrong, and in a timely fashion; Explainability – the new GDPR includes a clause giving a right to an explanation of decisions made about you by algorithms (this is now law, but much will depend on how it is interpreted in the courts); and the remaining three principles, Accuracy, Auditability and Fairness.
So basically, we need to ask questions about the protection of people, and there have to be these points of challenge. Organisations need to ensure mechanisms of recourse if anything does go wrong, and they should also consider liability. At a recent speaking engagement on this topic, Stephanie addressed a roomful of lawyers; she told them they should not see this as a way to shirk liability, but should think about what will happen.
This conversation is at the moment being driven by the autonomous car industry, who are worried about insurance and insurability. When something goes wrong with an algorithm, whose fault might it be? Is it the person who asked for it to be created, and deployed it? The person who designed it? Might something have gone wrong in the Cloud that day, such that a perfectly good algorithm just didn’t work as it was supposed to? ‘People need to get to grips with these liability issues now, otherwise it will be too late, and some individual or group of individuals will get screwed over,’ said Stephanie, ‘while companies try to say that it wasn’t their fault.’
Regulation might not turn out to be the answer. If you do regulate, what do you regulate? The algorithms themselves, similar to the manner in which medicines are scrutinised by the medicines regulator? Or the use of the algorithms? Or the outcomes? Or something else entirely?
Companies like Google, Facebook, Amazon and Microsoft – have they lost the ability to regulate themselves? How are companies regulating themselves? Should they regulate themselves at all? Stephanie doesn’t think we can rely on that. Those are some of the questions she put to the audience.
Tamara took back the baton. She noted that we interact extensively with AI through many aspects of our lives. Many jobs that have been thought of as a human preserve, thinking jobs, may become more automated, handled by a computer or neural network. Jobs as we know them now may not be the jobs of the future. Does that mean unemployment, or just a change in the nature of work? It’s likely that in future we will be working side by side with AI on a regular basis. Already, decisions about bank loans, insurance, parole and employment increasingly rely on AI.
As humans, we are used to interacting with each other. How will we interact with non-humans? Specifically, with AI entities? Tamara referenced the famous ‘ELIZA’ experiment conducted 1964–68 by Joseph Weizenbaum, in which a computer program was written to simulate a practitioner of person-centred psychotherapy, communicating with a user via text dialogue. In response to text typed in by the user, the ELIZA program responded with a question, as if trying sympathetically to elicit further explanation or information from the user. This illustrates how we tend to project human qualities onto these non-human systems. (A wealth of other examples are given in Sherry Turkle’s 1984 book, ‘The Second Self’.)
However, sometimes machine–human interactions don’t happen so smoothly. Robotics professor Masahiro Mori studied this in the 1970s, examining people’s reactions to robots made to appear human. Many people responded to such robots with greater warmth as they were made to appear more human, but at a certain point along that transition there was an experience of unease and revulsion which he dubbed the ‘Uncanny Valley’. This is the point when something jarring about the appearance, behaviour or mode of conversation of the artificial human makes you feel uncomfortable and shatters the illusion.
‘Uncanny Valley’ research has been continued since Mori’s original work. It has significance for computer-generated on-screen avatars, and CGI characters in movies. A useful discussion of this phenomenon can be found in the Wikipedia article at https://en.wikipedia.org/wiki/Uncanny_valley
There is a Virtual Personal Assistant service for iOS devices, called ‘Fin’, which Tamara referenced (see https://www.fin.com). Combining an iOS app with a cloud-based computation service, ‘Fin’ avoids some of the risk of Uncanny Valley by interacting purely through voice command and on-screen text response. Is that how people might feel comfortable interacting with an AI? Or would people prefer something that attempts to represent a human presence?
Clare Parry remarked that she had been at an event about care robots, where you don’t get an Uncanny Valley effect because despite a broadly humanoid form, they are obviously robots. Clare also thought that although robots (including autonomous cars) might do bad things, they aren’t going to do the kind of bad things that humans do, and machines do some things better than people do. An autonomous car doesn’t get drunk or suffer from road-rage…
Tamara concluded by observing that our interactions with these systems shape how we behave. This is not a new thing – we have always been shaped by the systems and the tools that we create. The printing press moved us from an oral, social method of sharing stories to a more individual experience, which arguably has made us more individualistic as a society. Perhaps our interactions with AI will shape us similarly, and we should stop and think about the implications for society. Will a partnership with AI bring out the best of our humanity, or make us more machine-like?
Tamara would prefer us not to think of Artificial Intelligence as a reified machine system, but of Intelligence Augmented, shifting the focus of discussion onto how these systems can help us flourish. And who are the people that need that help the most? Can we use these systems to deal with the big problems we face, such as poverty, climate change, disease and others? How can we integrate these computational assistances to help us make the best of what makes us human?
There was so much food for thought in the lectures that everyone was happy to talk together in the final discussion and the chat over refreshments that followed. We could campaign to say, ‘We’ve got to understand the algorithms, we’ve got to have them documented’, but perhaps there are certain kinds of AI practice (such as those involved in medical diagnosis from imaging input) where it is just not going to be possible.
From a blog by Conrad Taylor, June 2018
Some suggested reading
The Guardian, 30 August 2018
https://www.theguardian.com/technology/2018/aug/29/coding-algorithms-frankenalgos-program-danger
July 2018 Seminar: Machines and Morality: Can AI be Ethical?
Summary
At this meeting Stephanie Mathisen, Policy Manager at Sense About Science, and Tamara Ansons, Behavioural Science Consultant at Ipsos, addressed the question of the Ethics of Artificial Intelligence – is it possible for machines to have morality?
Speakers
Dr Tamara L Ansons is an expert in behavioural science. After receiving her PhD in Brain and Cognitive Sciences from the University of Manitoba, she did a post-doc in Marketing at the University of Michigan, then worked as an Assistant Professor of Marketing at Warwick Business School before moving to LSE to manage their Behavioural Research Lab. Her academic research focused on examining how subtle cognitive processes and contextual or situational factors non-consciously alter how individuals form judgements and behave. Much of this work has focused on how cognitive psychology can be applied to provide a deeper understanding of our interactions with technology – from online search behaviour to social media and immersive technologies. She has published her research in a range of academic journals and books, and presented it at many international conferences. At Ipsos, Tamara draws on her expertise to translate academic research into scalable business practices. Recent projects she has contributed to while at Ipsos include:
•Using goal setting and technology to increase physical activity in a healthcare community
•Examining the psychology of technology adoption
•Applying behavioural science to optimise digital experiences
•Developing a model of behaviour change to better understand the barriers and enablers of secure cyber behaviour
Dr Stephanie Mathisen is policy manager at Sense about Science, an independent charity that ensures the public interest in sound science and evidence is recognised in public debates and policymaking. Steph has just organised the first ever Evidence Week in the UK parliament, which took place 25–28 June this year. Steph works on transparency about evidence in policy and decision-making, including assessing the UK government’s performance on that front. She submits evidence to parliamentary inquiries and coordinates Sense about Science’s continuing role in the Libel Reform Campaign. In February 2017, Steph persuaded the House of Commons science and technology committee to launch an inquiry into the use of algorithms in decision-making.
Time and Venue
2pm on 26th July 2018, The British Dental Association, 64 Wimpole Street, London W1G 8YS
Pre Event Information
The speakers at this meeting will be addressing the question of the Ethics of Artificial Intelligence – is it possible for machines to have morality? To do this, they’ll be unpacking the hype currently surrounding AI – how much of it is justified, and how do they see these new technologies influencing human society over the coming decades? The potential of AI and its many applications needs little encouragement to spark enthusiastic intrigue and adoption. For example, when it comes to managing customer experiences, Gartner estimates that 85% of customer interactions will be managed without humans by 2020.
However, as we plough ahead with the adoption of AI, it hasn’t taken long to realise that incorporating AI into our lives needs a careful, measured approach. Indeed, unpacking AI’s integration into our lives gives us an opportunity – and a responsibility – to ensure AI brings out the best of our humanness while mitigating our shortcomings. Only through careful integration can the promise of AI be realised and turned to the big challenges we face.
Tamara Ansons will look at:
•Human input in the creation of AI (relating to the coders and to AI training)
•AI and measurement (spinning off from the previous point is how AI guides our focus to the specific/measurable)
•Humanising technology (where we do humanise and where some barriers exist)
Stephanie Mathisen will address the importance of:
•Meaningful transparency around algorithms used in decision-making processes (to challenge or agree; fairness)
•Scrutiny
•Accountability
Slides
No slides available for this presentation
Tweets
#netikx93
Blog
See our blog report: Machines and Morality: Can AI be Ethical?
Study Suggestions
The SciTech Committee can be found here