Blog for July 2021 Seminar: Ethical Artificial Intelligence

This seminar dealt with the complex issue of ethical artificial intelligence and ontologies. The speaker was Ahren E. Lehnert, a Senior Manager at Synaptica LLC, which has provided ontology, taxonomy and text analytics products for 25 years – http://www.synaptica.com

The central focus of Ahren’s talk was the relationship between ethics, artificial intelligence and ontologies. Artificial Intelligence (AI) in practice means machine learning, leading to content tagging, recommendation engines and terror and crime prevention. It is used in many industries, including finance and insurance, job applicant selection, the development of autonomous vehicles and artistic creativity. However, we must be careful, because there are some notable examples of ‘bots behaving badly’. For example, Microsoft’s chatbot, Tay, learned language from interaction with Twitter users. Unfortunately, Twitter ‘trolls’ taught Tay anti-Semitic, racist and misogynistic language, and Tay was closed down very quickly. Here we are in the territory of ‘ghosts in the machine’: is that photo really an image of (say) Arnold Schwarzenegger (actor and politician), or is it somebody posing as him, or somebody who just happens to look very much like him? More difficult is when you encounter an image of somebody you know is dead, such as Peter Cushing (actor), whose likeness may have been edited into an image that suits a particular project or viewpoint. Are we OK or not OK with these things? It does matter.

Information professionals frequently encounter machine learning – https://en.wikipedia.org/wiki/Machine_learning

Now, however much we may want to go “all in” on machine learning, most companies have not worked out how to “de-silo and clean their data”. Critically, there are five steps to predictive modelling: 1) get data; 2) clean, prepare and manipulate data; 3) train the model; 4) test the model; 5) improve. We must be realistic about the results. We will not build a ‘saviour machine’ (!). Machine learning basics include: 1) the need for big data; 2) the need to look for patterns; 3) the need to learn from experience; 4) the need for good examples; 5) the need to take time. We can find good and bad examples of machine learning, and we can use the examples of science fiction as portrayed in television and film. For example, ‘Star Trek’ portrays stories depicting humans and aliens serving in Starfleet who have altruistic values and are trying to apply these ideals in difficult situations. Alternatively, ‘Star Wars’ depicts a galaxy containing humans and aliens co-existing with robots. This galaxy is bound together by a mystical power known as ‘The Force’, wielded by two major knightly orders – the Jedi (peacekeepers) and the Sith (aggressors). Conflict is endemic. So machine learning fails when data is insufficient, inaccurate or inconsistent; when it finds meaningless patterns; when data scientists spend too little time improving their models; when the model is a ‘black box’ which users ‘don’t really understand’; and when it confronts unstructured text, which is difficult.
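The five predictive-modelling steps above can be sketched in code. This is a minimal illustration only – the seminar named the steps but no particular toolkit, so the use of scikit-learn and a synthetic dataset here are my assumptions:

```python
# A hedged sketch of the five steps: get data; clean/prepare;
# train; test; improve. Uses scikit-learn and synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1) Get data (synthetic here, standing in for real business data)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# 2) Clean, prepare and manipulate data (here: just a train/test split)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# 3) Train the model
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 4) Test the model on held-out data
baseline = accuracy_score(y_test, model.predict(X_test))

# 5) Improve: try a variant (stronger regularisation) and re-evaluate
improved = LogisticRegression(C=0.1, max_iter=1000).fit(X_train, y_train)
improved_score = accuracy_score(y_test, improved.predict(X_test))
print(round(baseline, 3), round(improved_score, 3))
```

In practice, of course, step 2 – de-siloing and cleaning real data – is where most of the effort goes, and step 5 is a loop rather than a single retry.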

What is the source of the biases making their way into machine learning? Well, people generate content, and people have biases to do with language, ideas, coverage, currency and relevance. Taxonomies are constructed to reflect an organizational viewpoint. They are built from content which can be flawed. The coverage can have topical skews. They can be built by a single taxonomist or a team, and the subject matter expertise can be wanting. Furthermore, text analytics is ‘inherently difficult’, for reasons of language, technique and content. Algorithms in machine learning models depend on training data, which must be accurate and current with good coverage. Here is a quote from Jean Cocteau – “The course of a river is almost always disapproved of by its source”. Is the answer an ontology?
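The point that biased training data produces a biased model can be made concrete with a toy example. The scenario below is my own construction, not from the talk: a classifier trained on data where one group is barely represented performs well for the dominant group and poorly for the under-represented one.

```python
# Toy demonstration: skewed training coverage -> skewed model quality.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, offset):
    # Two-feature points centred at `offset`; the true label depends
    # on a boundary that differs between groups.
    X = rng.normal(offset, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * offset).astype(int)
    return X, y

# Group A dominates the training set; group B is almost absent.
Xa, ya = make_group(500, 0.0)
Xb, yb = make_group(10, 3.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on balanced samples of each group.
Xa_t, ya_t = make_group(200, 0.0)
Xb_t, yb_t = make_group(200, 3.0)
acc_a = model.score(Xa_t, ya_t)
acc_b = model.score(Xb_t, yb_t)
print("group A accuracy:", round(acc_a, 2))
print("group B accuracy:", round(acc_b, 2))
```

The model never saw enough of group B to learn its pattern, so its accuracy there is little better than chance – the coverage skew in the data becomes a skew in outcomes.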

What is ethical AI? What does it mean? It means being Transparent, Responsible and Accountable. Transparent – both ML and AI outcomes are explainable.

Responsible – Avoiding the use of biased algorithms or biased data.

Accountable – Taking action ‘to actively curate data, review and test’.

FAST Track Principles – Fairness, Accountability, Sustainability, Transparency.

Whose ethics do we use – the ethics of Captain Kirk from ‘Star Trek’ or the ethics of HAL the computer from ‘2001: A Space Odyssey’? We are back with our earlier ‘Star Trek’ / ‘Star Wars’ conundrum. How will these ethics work out in practice? How will we reach consensus? How do we define what is ethical, and in what context? Who will write the codes of conduct? Will it be government? Will it be business? Who will enforce the codes of conduct?

What are the risks of AI in practice? Poor business outcomes; unintended consequences; mistrust of technology; weaponization of AI technology; political and/or social misinformation; deepfakes; Skynet.

Steps towards ethical AI, and steps to success within the organization: conduct risk assessments; understand social concerns; understand data sources and data science; invest in legal resources; understand industry- and geo-specific regulatory requirements; tap into external technological expertise. There will be goals and challenges to overcome, and there should be an ethical AI manifesto or guidelines. An ethical AI manifesto will identify corporate values; align with regulatory requirements; involve the entire organization; communicate the process and the results; nominate a champion. Many existing frameworks of AI ethics guidelines are vague formulations with no enforcement mechanisms.

So, to get started on the AI programme we must clearly define the problem: what do you want to do? Why do you want to do it? What do you expect the outputs to be, and what will you do with them? We must seek to ‘knowledge engineer’ the data to provide a controlled perspective and construct a ‘virtuous content cycle’. We aim for a definitive source for ontologies – authoritative, accurate and objective. Pay particular attention to labelling, quality data and training data. Get the data and create trust in the consuming systems and their resulting analytics and reporting. Use known metrics. Remember that governance applies to business and technical processes.
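The advice to pay particular attention to labelling and training-data quality can be turned into a routine check run before any model is trained. The sketch below is illustrative only – the record fields and labels are hypothetical – but it shows the kind of audit meant: report label coverage and flag conflicting labels.

```python
# A simple pre-training audit of a labelled dataset: count label
# coverage (including missing labels) and detect items that have
# been given contradictory labels. All field names are illustrative.
from collections import Counter, defaultdict

training_data = [
    {"text": "quarterly results up 5%", "label": "finance"},
    {"text": "new driverless car trial", "label": "automotive"},
    {"text": "quarterly results up 5%", "label": "marketing"},  # conflict
    {"text": "insurance premiums rise", "label": ""},           # unlabelled
]

def audit(records):
    coverage = Counter(r["label"] or "(missing)" for r in records)
    by_text = defaultdict(set)
    for r in records:
        if r["label"]:
            by_text[r["text"]].add(r["label"])
    conflicts = {t: sorted(ls) for t, ls in by_text.items() if len(ls) > 1}
    return coverage, conflicts

coverage, conflicts = audit(training_data)
print(coverage)
print(conflicts)
```

Checks like this are one small, concrete way governance applies to the technical process: they produce known metrics (coverage counts, conflict lists) that can be reviewed before the data reaches any consuming system.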

Rob Rosset 26/07/2021

July 2021 Seminar : Ethical Artificial Intelligence

Summary

What is ethical, or responsible, artificial intelligence (AI)? In essence, we can identify three concerns/issues: “the moral behaviour of humans as they design, make, use and treat artificially intelligent systems”; “a concern with the behaviour of machines, in machine ethics” – for example, computational ethics; and “the issue of a possible singularity due to superintelligent AI” – a fascinating glimpse into a future in which computers might ‘take over’. Is this still science fiction?

https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence#Singularity

This seminar encompassed important topics for Knowledge Management and Information Management practitioners. Topics included ‘bias’ in the machine, in machine learning, in the content cycle, in the taxonomy, in the text analytics, in the algorithms and, of course, in the real world. Critically, where do knowledge organization systems fit in, and how can practitioners play a role in creating ethical artificial intelligence? Should companies begin to develop a publicly available AI ethics strategy?

Speaker

The speaker was Ahren Lehnert – Senior Manager, Text Analytics Solutions at Synaptica, based in Oakland, California, USA. https://www.synaptica.com/

Ahren is a graduate of Eastern Michigan University in the Mid-West and a post-graduate of Stony Brook University in New York.

Ahren is a knowledge management professional passionate about knowledge capture, organisation, categorisation and discovery. His main areas of interest are text analytics, search and taxonomy and ontology construction, implementation and governance.

His fifteen years of experience span many sectors, including marketing, health care, Federal and State government agencies, commercial and e-commerce, geospatial, oil and gas, telecom and financial services.

Ahren is always seeking ways to improve the user experience through better functionality, making it as ‘painless’ as possible based on the state of the industry, best practices and standards.

Time and Venue

Thursday July 22nd at 2:30pm, via a Zoom online meeting.

Slides

Slides available for members in the Members Hub.

Tweets

#netikx111

Blog

There is a blog available here.

Study Suggestions

Ontologies and Ethical AI | Synaptica LLC

Ethics & Bias in the Content Cycle | Synaptica LLC

Rob Rosset 27/07/2021

July 2018 Seminar: Machines and Morality: Can AI be Ethical?

Summary

At this meeting Stephanie Mathisen, Policy Manager at Sense About Science, and Tamara Ansons, Behavioural Science Consultant at Ipsos, addressed the question of the Ethics of Artificial Intelligence – is it possible for machines to have morality?

Speakers

Dr Tamara L Ansons is an expert on behavioural science. After receiving her PhD in Brain and Cognitive Sciences from the University of Manitoba, she did a post-doc in Marketing at the University of Michigan and then worked as an Assistant Professor of Marketing at Warwick Business School before moving to LSE to manage their Behavioural Research Lab. Her academic research focused on examining how subtle cognitive processes and contextual or situational factors non-consciously alter how individuals form judgments and behave. Much of this work has focused on how cognitive psychology can be applied to provide a deeper understanding of our interactions with technology – from online search behaviour, to social media and immersive technologies. She has published her research across a range of academic journals and books, and presented her research at many international conferences. At Ipsos Tamara is drawing on her expertise to translate academic research into scalable business practices. Recent projects that she has contributed to while at Ipsos include: Using goal setting and technology to increase physical activity in a healthcare community; Examining the psychology of technology adoption; Applying behavioural science to optimise digital experiences; Developing a model of behaviour change to better understand the barriers and enablers of secure cyber behaviour.

Dr Stephanie Mathisen is policy manager at Sense about Science, an independent charity that ensures the public interest in sound science and evidence is recognised in public debates and policymaking. Steph has just organised the first ever Evidence Week in the UK parliament, which took place 25–28 June this year. Steph works on transparency about evidence in policy and decision-making, including assessing the UK government’s performance on that front. She submits evidence to parliamentary inquiries and coordinates Sense about Science’s continuing role in the Libel Reform Campaign. In February 2017, Steph persuaded the House of Commons science and technology committee to launch an inquiry into the use of algorithms in decision-making.

Time and Venue

2pm on 26th July 2018, The British Dental Association, 64 Wimpole Street, London W1G 8YS

Pre Event Information

The speakers at this meeting will be addressing the question of the Ethics of Artificial Intelligence – is it possible for machines to have morality? To do this, they’ll be unpacking the hype currently surrounding the subject of AI – how much of it is justified, and how do they see these new technologies influencing human society over the coming decades? The potential of AI and its many applications needs little to spark enthusiastic intrigue and adoption. For example, when it comes to managing customer experiences, Gartner estimates that 85% of customer interactions will be managed without humans by 2020.

However, as we plough ahead with the adoption of AI, it hasn’t taken long to realise that incorporating AI into our lives needs to be handled with a careful, measured approach. Indeed, unpacking AI’s integration into our lives provides us with an opportunity – and responsibility – to ensure AI brings out the best of our humanness while mitigating our shortcomings. It is through a careful integration that the promise of AI and us can be realised to address the big challenges we face.

Tamara Ansons will look at:
•Human input in the creating of AI (relating to the coders and to AI training)
•AI and measurement (spinning off from the previous point is how AI guides our focus to the specific/measurable)
•Humanising technology (where we do humanise and where some barriers exist)

Stephanie Mathisen will address the importance of:
•Meaningful transparency around algorithms used in decision-making processes (to challenge or agree; fairness)
•Scrutiny
•Accountability

Slides

No slides available for this presentation

Tweets

#netikx93

Blog

See our blog report: Machines and Morality: Can AI be Ethical?

Study Suggestions

The SciTech Committee can be found here