Blog for July 2020 Seminar: A Library during lockdown

Antony Groves has been working at the University of Sussex for 15 years, starting in a ‘front line’ role and continuing into his current job, where he talks to and supports many students at both undergraduate and postgraduate level. He is a member of CILIP and blogs for the Multimedia and Information Technology Group. Antony is a reflective practitioner and believes in making things happen. At present there are two major priorities – proactively working towards making the UoS website accessible by the government deadline of September 23rd 2020, and reactively working to make the UoS website and services as useful as possible following the Covid-19 lockdown in March.

Two key ideas – accessibility and usability. Accessibility can involve straightforward things such as font size, changes in colour, and ensuring that a site can be operated from the keyboard. For more on accessibility, see https://www.jisc.ac.uk/accessibility

Antony pointed to ‘Strategic approaches to implementing accessibility’, known more colloquially as ‘the Kent strategy slides’. 2019 saw over a million visits to the library website, with 6,170 on the busiest day – Tuesday May 14th. There has been a shift (a pivot) from physical visits to digital space. The main focus is on the user.
At this time there is a rush to open things up after lockdown without necessarily thinking about who is coming through the door and what they want now. Spending your days updating and coding can leave you ‘removed’ from the user. The Government Design Principles are a good place to start – https://www.gov.uk/guidance/government-design-principles

These principles include ‘This is for everyone’. You start with user needs and you design with data. You build ‘digital services’, not websites. Remember that ‘a service is something that helps people to do something’. Iterate, then iterate again. We began by speaking to the academic community and gathering feedback. Over 100 pieces of feedback were collected and grouped into four main themes: architecture, behaviour, content and labelling. Top tasks were identified (e.g. searching for and finding books, booking rooms, accessing an account) – https://www.alistapart.com/what-really-matters-focusing-on-top-tasks/
People mainly carry out a handful of tasks, so develop these first.

Architecture – “Confusing having two types of navigation”.

Behaviour – “Have never used library search tabs”.

Content – “More photos of the library and more infographics”.

Labelling – “Skills hub should have a description mentioning academic skills”.

Design with data – We benchmarked with other institutions.

We looked at Google Analytics – most/least viewed pages, along with bounce and exit rates. We ran ‘card sorts’ to determine site structure. We created user stories to help edit pages. Two examples of the results: the new ‘Making your research available’ section has very low bounce and exit rates, and these have also dropped across the whole site, indicating that people are finding what they expect to; and the ‘Find a book in the library’ page had 6,785 views, compared with 1,182 in the 2018 autumn term when it was located in the ‘Using the Library’ section.

Iteration goes on and on. There is still much to ‘unpack’ and ‘improve’. User testing is currently being organised. Usage is being analysed to see which parts of the website are seeing fewer views and less engagement. The team is working with others inside and outside the UoS Library to make the digital services as useful as they can be to the community.

When Covid-19 hit the UK we considered carefully how to respond. We devised a three-pronged approach: Pivot / Add / Hide. ‘The Pivot’ involved moving the library from a physical presence into a digital space: for example, study rooms were no longer available, so room bookings were changed into Zoom bookings. ‘The Add’ meant introducing new services. There is a ‘click and study’ service starting this week, whereby individuals can book a study place, as well as a ‘click and collect’ service and ‘Library FAQs’ appropriate for the period of lockdown. ‘The Hide’ concerned removing information on the website that was no longer appropriate, such as ‘Information for visitors’. Instead, we created a guide to ‘Open Access Items’ and a ‘Schools Guide’.

All this work has been recognised by a ‘Customer Service Excellence’ award.

Antony is pleased that the work of the UoS Library Staff has been recognised but he takes it with a ‘pinch of salt’ as he is intent on doing more ‘user testing’ and receiving much more feedback as well as talking to his community.

In conclusion, Antony named the inspiration behind this approach to digital services – ‘Revisiting Ranganathan: Applying the Five Laws of Library Science to Inclusive Web Design’. For ten changes made to the library website since lockdown, see www.mmitblog.wordpress.com

Rob Rosset 25/07/2020

Blog for the September 2020 Seminar: TRIZ

TRIZ (a Russian acronym for a phrase usually translated as ‘the theory of inventive problem-solving’) is not a well-known technique in knowledge and information management circles. It is the brainchild of Genrich Altshuller, an engineer, scientist, inventor and writer – who, incidentally, paid the price for his innovative thinking style by displeasing Stalin and consequently being sent to a labour camp. However, he used his experiences there to further refine his problem-solving techniques!

TRIZ is still most widely used in the engineering field, but the TRIZ principles are applicable to any kind of problem, not just technical ones.

Ron Donaldson, NetIKX committee member and TRIZ expert at Oxford Creativity, took us through the fundamentals of TRIZ in an intensive yet enjoyable seminar, enhanced by the wonderful cartoons of Clive Goddard. The TRIZ approach is based on the principle of analogous thinking – often we limit ourselves to the solutions found within our own area of expertise, whereas in fact we could apply solutions from other domains where similar problems have been faced. The advantage of this approach is that you learn to think conceptually and to view a problem in an abstract way, rather than becoming bogged down in detail.

But, given that most of us lack this breadth of knowledge, how do we access these creative solutions? Altshuller analysed 50,000 patent abstracts to identify how the innovation had taken place. From this he developed the essential TRIZ methodology: the concept of technical contradictions, the concept of the ideality of a system, the contradiction matrix and the 40 principles of invention. He also modelled creative thinking tools and techniques from observing creative people at work and uncovering patterns in their thinking.

At the heart of all problems requiring an inventive solution, there is a contradiction: for example, we want something that is both strong and lightweight, but how do we increase strength without also increasing weight? The existence of a contradiction does not mean you cannot solve a problem: Ron suggested that we need to ‘channel our inner Spice Girl’ and state what we ‘really, really want’ as there is usually a way of getting it without having to change anything!

Altshuller’s research identified three characteristics of creative people: they think without constraints; they think in time and scale, and they get everything they want. When you have identified your ideal outcome, you can work ‘backwards towards reality’.

One of the TRIZ tools is the contradictions matrix, which allows you to map the contradictions inherent in your problem and to identify inventive principles to solve them. We saw examples of the principles and how they can be used in different contexts: for example, principle 13 (The Other Way Round) could involve turning an object upside-down, or making the fixed parts moveable and the moving parts fixed. TRIZ also emphasises the importance of using the resources that you have, which supports sustainability and reuse.
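
To make the matrix mechanics concrete, here is a minimal Python sketch of how a contradiction-matrix lookup might work. The parameter pairs and the principles attached to them are illustrative placeholders, not the published matrix entries, so treat it as a shape for the idea rather than a working TRIZ tool.

```python
# Minimal sketch of a TRIZ contradiction-matrix lookup.
# The real matrix maps 39 engineering parameters against each other;
# the entries below are illustrative placeholders, not the published values.

INVENTIVE_PRINCIPLES = {
    1: "Segmentation",
    13: "The Other Way Round",
    28: "Mechanics substitution",
    40: "Composite materials",
}

# (parameter to improve, parameter that worsens) -> suggested principle numbers
CONTRADICTION_MATRIX = {
    ("strength", "weight of moving object"): [1, 40],
    ("speed", "energy use"): [13, 28],
}

def suggest_principles(improve: str, worsens: str) -> list[str]:
    """Return the names of inventive principles suggested for a contradiction."""
    numbers = CONTRADICTION_MATRIX.get((improve, worsens), [])
    return [f"{n}. {INVENTIVE_PRINCIPLES[n]}" for n in numbers]

print(suggest_principles("strength", "weight of moving object"))
# ['1. Segmentation', '40. Composite materials']
```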

Ron set us two questions to consider in the breakout sessions (which, luckily, we were able to replicate effectively via Zoom!): how would you use TRIZ within knowledge management? And which bits of the session really inspired you? This led to a discussion ranging across the design of tin-openers, Altshuller’s science fiction stories and the challenges of applying inventive solutions in the public sector. It is safe to say that we were all intrigued by what we had learned and keen to explore further.

TRIZ is open source and is not copyrighted – so you can try out the toolkit for yourself. The contradictions matrix, the 40 principles and other tools are free to download from the Oxford Creativity site, where you can also sign up for free webinars on TRIZ. Give it a go and unleash your genius!

By Carlin Parry

Carlin’s LinkedIn web address is: https://www.linkedin.com/in/carlinparry/

Blog for October 2020: Information as an Asset and the Hawley Report

In 1995, a ground-breaking report, Information as an Asset: the Board Agenda (which came to be known as The Hawley Report) was published. This report called for a recognition of corporate information as a strategic asset and laid out the responsibilities of boards to identify their information assets and to ensure that these are managed appropriately and deployed to best advantage. It was developed by a group led by Robert Hawley, the CEO of Nuclear Electric, and aimed firmly at boards and senior executives. The report itself disappeared from view for several years after publication, but it remained an important milestone in corporate knowledge and information management.

In 2017, CILIP and KPMG launched a joint programme of work to plan and deliver an updated version, which was published in February 2019 as ‘Information as an Asset: today’s board agenda’. This was based on a survey of over 540 respondents who gave insights into their respective organisations. The authors noted several developments in the field since the original Hawley report, including the importance of AI, text and data analytics, machine learning, and robotics; the development of systems which learn faster than humans; the growth of ‘big data’; the increasing need to protect information assets; and the socio-political climate around recognition of the value and management of personal data. In early March 2020, a further report, ‘The Edge of Intelligence’, was published by the Financial Times.

Twenty-five years on from the original Hawley Report, the information landscape has changed considerably, but the need to manage information as a corporate asset is arguably greater than ever.

NetIKX was pleased to welcome Stephen Phillips, an information professional whose experience spans over 30 years and includes having been Global Head of Business Services at a leading investment bank, to provide an overview of Hawley’s legacy and the subsequent developments within corporate knowledge and information management. Stephen took us through the key themes of the three reports and posed the question of how organisations are dealing with the current COVID-19 crisis in addition to those challenges already facing them pre-pandemic. Key findings of Dell’s recent Digital Transformation Index emphasised the importance of knowledge sharing, extraction of insights from data, skills in data analysis and related disciplines and the need to make business decisions based on data in real time. This survey was undertaken in July and August 2020, so reflected concerns raised by the COVID-19 pandemic more closely.

Although some issues raised in the Digital Transformation Index are specifically related to the current crisis – such as lack of economic growth and the need for increased cybersecurity due to home working – the key themes from The Edge of Intelligence remain relevant.

Stephen went on to explore these four themes, which reflect the areas where most companies lack confidence about their competencies – limited horizon scanning, ‘lost in translation’ (bridging the gap between data science and operational expertise), technical failure, and ‘data without democracy’ (sharing market intelligence across functions). In the 2019 Information as an Asset report, market research was consistently viewed as the most reliable source of intelligence, but there are signs that this may be shifting – particularly in light of the growing importance of AI and the Internet of Things (IoT). The McKinsey COVID Response Center has produced a set of response tools for business leaders which highlight the importance of talent (a factor which was notably ranked low in responses to the FT survey) and supply-chain resilience, as well as cybersecurity and the need to re-evaluate analytics models.

Drawing on the information from these sources, Stephen then invited us to consider a proposed set of priorities for what has come to be called ‘the new normal’:

* accelerated decision-making

* horizon-scanning

* data deluge

* talent

* democratising data

* insight

* intelligence and knowledge

* ethics and integrity

This formed the basis for discussion in the breakout sessions, where we shared our own views and experiences of issues such as the risks of decision-making based on algorithms, the increased role of social media in sharing information (or disinformation!) and the continued need for us as information professionals to convince others of the commercial value of knowledge and information management. As we navigate the ‘new normal’ – whatever that may turn out to be – our skills are increasingly needed.

October 2020 Seminar: From Hawley to the Edge of Intelligence: the continuing evolution of Knowledge and Information Management to adapt to the new normal

Summary

This seminar was all about the ‘Financial Times’ survey entitled ‘The Edge of Intelligence’, published in early March 2020. Stephen Phillips is a member of CILIP, the Library and Information Association, which worked with the ‘Financial Times’ to commission a survey of its readers. The purpose of this was to build a better understanding of their perception of knowledge and information management. This exercise followed on from the release in 2019 of ‘Information as an Asset’, an update of an earlier report from the mid-1990s issued by the Hawley Committee. NetIKX has always been deeply involved with this work and therefore wanted to provide an overview of the new report and how it had developed from the previous Hawley Report. Stephen also discussed the key findings of Dell’s recent Digital Transformation Index, which emphasised the importance of knowledge sharing, extraction of insights from data, skills in data analysis and related disciplines, and the need to make business decisions based on data in real time.

The seminar therefore gave the audience an opportunity to reflect on the key findings of the survey and then discuss the practical ways that Knowledge and Information Managers can help their organisations going forward, through the pandemic, constantly maintaining and building on their success.

Speaker:

Stephen Phillips.
Stephen is the owner of Smart IM Ltd, which he created following a highly successful thirty-year career in the financial services industry, where he developed and implemented KM and IM strategy, third-party data sourcing and management, and onshore/offshore staffing models. He has a track record of innovative use of technology to shape and manage workflows, third-party inventory, entitlements, credentials and usage tracking, machine translation and KM platforms. Smart IM provides strategic support to deliver these capabilities, including research, information and knowledge management solutions.

Stephen believes that world-class Knowledge and Information Management enables good decision making and competitive advantage. Organisations can be helped to address four strategic imperatives: to increase revenues; to reduce costs; to mitigate risks; and to comply with legal and regulatory obligations.
Stephen is 2021 Conference Chair for SLA Europe and Vice Chair of CILIP’s Knowledge and Information Management Member Network. He has previously served as President of the European Chapter of SLA and actively contributes to industry events and journals.

In his spare time, Stephen is a school trustee, he is a keen golfer, he likes to vacation in Portugal with his family and he enjoys long walks with Reggie, his Airedale terrier “discussing life, the universe and everything” …

Time and Venue

22nd October 2020 at 2:30 pm on the Zoom platform. This is a virtual session.

Slides

Will be made available after the session for members only.

Tweets

#netikx106

Blog

Please see our Blog post here….

Study Suggestions

1995 Information as an Asset: the Board Agenda – https://cilip.org.uk/informationasset
2019 Information as an Asset: Today’s Board Agenda – https://cilip.org.uk/informationasset
2020 The Edge of Intelligence – https://intelligence.ft.com
Covid Impact on the Workplace – https://www.thebcfa.com/bcfa-covid-19-impact-survey
McKinsey COVID Response Center – https://www.mckinsey.com/about-us/covid-response-center/home
Dell Digital Transformation Index – https://www.delltechnologies.com/en-us/perspectives/digital-transformation-index.htm

Blog for November 2020: Framework and ISO standards for Collaboration, KM and Innovation

At first glance it may seem counterintuitive to have standards for innovation and collaboration – these are, after all, things which many people perceive as happening organically and spontaneously:  the myth of creativity as a ‘Eureka moment’ is still prevalent, despite evidence to the contrary. Standards are often viewed as being imposed by authority and making work processes more cumbersome and bureaucratic. In this seminar, Ron Young of Knowledge Associates outlined how standards can in fact provide a framework for creativity and innovation and how they can be applied within an organisation.

Ron began by outlining the need for standards in knowledge management, starting with the 1998 white paper on ‘UK competitiveness in the global knowledge-driven economy’, a high-level strategy for the UK which acknowledged that effective collaboration, co-creation, knowledge and innovation were difficult to copy and were therefore key to global competitiveness and sustainability. As humans, we like to collaborate and share, but trust needs to be in place for this to succeed. Trusted partnerships and a collaborative business model are vital. The development of blockchain technology is relevant here as it provides a decentralised trust model for the exchange of information. Ron reminded us that trusted systems are as important as trusted people.

The importance of collaboration was illustrated by a number of examples of international projects, ranging from the establishment of the first Europe-wide KM team in 1999, the first pan-European KM conference in 2000 and the first global KM community of practice in 2001, through to the publication of the global KM standard, ISO 30401 in 2018. This standard was also adopted by the European Space Agency as the basis for its knowledge management governance framework.

We then learned more about the published standards ISO 44001 (collaborative partnerships), ISO 30401 (knowledge management) and ISO 56002 (innovation management) as well as the way in which these, along with ISO 55001 (intangible asset management) and ISO 27001 (information security) all fit together to form a common framework for knowledge- and information-driven thought leadership. Knowledge asset management (the ‘Internet of Assets’) is fundamental to achieving organisational objectives, but ethical considerations are also crucial, especially as we enter a world dominated by artificial intelligence. This has been recognised by the IEEE in their work on ethically aligned design. Ron pointed out that all ISO standards now ensure that the principles of the standard are embedded in the standard itself. As technologies, processes and people all change over time, principles remain the same and provide a reminder of why we are ‘doing’ KM. We need to make sure that knowledge is transformative and strategic and to build a ‘virtuous spiral’ of knowledge.

As is traditional at NetIKX seminars, the talk was followed by syndicate sessions (replicated in Zoom by breakout rooms) during which we discussed the issues covered in Ron’s presentation, including our own experiences of using and applying standards, the ethical implications of artificial intelligence and the importance of keeping the ‘human in the loop’ in KM processes in which algorithms and machine learning are incorporated. We were all impressed by the way in which Ron made a potentially dry subject so interesting and relevant to everyday KM practice.

Rob Rosset

Blog for May 2020: Gurteen knowledge cafe

How do we thrive in a hyper-connected, complex world?

An afternoon of conversation with David Gurteen

There was a great start to this Zoom meeting. David Gurteen gave some simple guidance to participants so we could all Zoom smoothly. It was a great best-practice demo. We are all becoming good at Zoom, but simple guidance on how to set the visuals and mute the sound is a wise precaution to make sure we are all competent with the medium. He also set out how the seminar would be scheduled, with breakout groups and plenaries. It was to be just like a NetIKX seminar in the BDA meeting room, even though it was totally different! I felt we were in very safe hands, as David was an early adopter of Zoom but still recognises that new people will benefit from clarity about what works best. Well done David.

The introduction set the scene for the content of our café.  We were looking at how we live in a hyper-connected complex rapidly evolving world. David outlined many dimensions to this connectedness, including transport changes, internet, social media, global finances…

In his view, over the last 75 years this increased connectivity has led to massive complexity, and today we can conceive of two worlds – an old world before the Second World War and a new world that has emerged since 1945. Not only are our technological systems complex, but we human beings are immensely complex, non-rational, emotional creatures full of cognitive biases. This socio-technical complexity, together with our human complexity, has resulted in a world that is highly volatile, unpredictable, confusing and ambiguous. Compare the world now with the locally focused world that dominated the pre-war years.

Furthermore, this complexity is accelerating as we enter the fourth industrial revolution in which disruptive technologies and trends such as the Internet of Things, robotics, virtual reality, and artificial intelligence are rapidly changing the way we live and work. Our 20th-century ways of thinking about the world and our old command and control, hierarchical ways of working no longer serve us well in this complex environment.

Is it true that if we wish to thrive, we need to learn to see the world in a new light, think about it differently, and discover better ways in which to interact and work together?

Break out groups

With practised expertise, David split us into small break-out groups to discuss the talk so far. Did we agree, or did we feel continuity was a stronger thread than change? Then we swapped groups to take the conversation on further.

Leadership

After the break-out groups, David looked at the two linked ideas behind Conversational Leadership. He had some wonderful quotes about leadership. Was the old command-and-control model gone? Do leaders have to hold a specific role, or can we all give leadership when the opportunity is there? Of course, David provided examples of this, but perhaps after the seminar a very powerful example stands out – the 22-year-old footballer changing the mind of a government with an 80-seat majority! You don’t need to have the expected ‘correct’ label to be a powerful leader.

Conversation

We also looked at the other element: talking underpins how we work together. Using old TV clips and quotes, David urged us to consider how we communicate with each other, and whether there is scope to change the world through talking. Again, there was plenty of food for thought as we considered new ideas such as ‘unconscious bias’, ‘media bubbles’, ‘fake news’ and the global reach of social media.

We then broke into small groups again, to take the conversation further, using David’s talk as a stimulus.

Plenary

At the end of the break-out groups, we re-joined as a mass of faces smiling out of the screen, ready to share our thoughts. It is a wonderful thing, when you make a point, to see heads nodding across the Zoom squares. I recommend it to anyone who has not tried it!

Some themes emerged from the many small group chats. One was the question of the fundamental nature of change. Was our world so different when the humans within it remain very much the same? We looked very briefly at what we think human nature is and whether it remains a constant despite the massively different technology we use on a daily basis. Even if humans are the same fallible clay, the many practical ways we can now communicate give us much more potential to hear and be heard.

We also considered the role of trust. In our workplaces, trust often seems to be in short supply, but it is key to leaders taking on authority without becoming authoritarian. The emphasis on blame culture and short-term advantage has to be countered by building genuine trust.

Is there potential for self-governing teams? The idea sounds inviting but would not in itself ensure good leadership or the sharing of ideas. The loudest voice might still monopolise attention – and with some justification, as not everyone wants to be proactive. Some prefer to follow as their choice, and others like to take part but balk at the tedium of talking through every minute decision! This idea may have potential, but we agreed it would not be a panacea.

We did agree that roles and rules could be positive to help give shape to our working lives, but that they need not constrict our options to lead when the time comes.  And we can see the leadership role that our professional calling suggests.   With so many new information channels, so many closed groups and so many conflicting pressures, as information or knowledge professionals, we can take a leadership role in helping and supporting our chosen groups of very human work colleagues to understand and thrive in this complex and evolving world. Conversational Leadership should be one of the tools we take away to enable our work with colleagues.

Final Notes:

The NetIKX team.

NetIKX is a community of interest for Knowledge and Information Professionals. We run six seminars each year, and the focus is always on top-quality speakers and the opportunity to network with peers. We are delighted that the lockdown has not stopped our seminars taking place, and we expect to take Zoom with us when we leave lockdown! You can find details of how to join the lively NetIKX community on our Members page.

Our Facilitator

David Gurteen is a writer, speaker, and conversational facilitator. The focus of his work is Conversational Leadership – a style of working where we appreciate the power of conversation and take a conversational approach to the way that we connect, relate, learn and work with each other. He is the creator of the Knowledge Café – a conversational process that brings a group of people together to learn from each other, build relationships and make better sense of a rapidly changing, complex, less predictable world. He has facilitated hundreds of Knowledge Cafés and workshops in over 30 countries around the world over the past 20 years. He is also the founder of the Gurteen Knowledge Community – a global network of over 20,000 people in 160 countries. Currently, he is writing an online book on Conversational Leadership. You can join a Knowledge Café via his website.

Blog for July 2019: Content strategy

The speakers for the NetIKX meeting in July 2019 were Rahel Baillie and Kate Kenyon. Kate promised they would explain what Content Strategy is, what it isn’t, and how it relates to the work of Knowledge Management professionals. The two speakers came at this work from different backgrounds: Rahel deals with technical systems, while Kate trained as a journalist and worked at the BBC.

Managing content is different from managing data.   Content has grammar and it means something to people.  Data such as a number in a field is less complex to manage.  This is important to keep in mind because businesses consistently make the mistake of trying to manage content as if it were data.  Content strategy is a plan for the design of content systems. A content system can be an organic, human thing.  It isn’t a piece of software, although it is frequently facilitated by software.  To put together a content strategy, you have to understand all the potential content you have at hand.  You want to create a system that gives repeatable and reliable results.  The system deals with who creates content, who checks it, who signs it off and where it is going to be delivered.   The system must govern the management of content throughout its entire lifecycle.  The headings Analyse, Collect, Manage and Deliver can be useful for this.

Kate pointed out that if you are the first person in your organisation to be asked to look at content strategy, you might find yourself working in all these areas, but in the long run they should be delegated to the appropriate specialists, who can follow the Content Strategy plan. In brief, the first part of content strategy is to assess business need, express it in terms of business outcomes and write a business case. It is part of the job to get a decent budget for the work! When you have a clear idea of what the business wants to achieve, the next question is – where are we now? What should we look at? You will need to audit current content, who is producing it and why. Assess the roles of everyone in the content lifecycle – not just writers and editors, but also those who commission, create and manage content, as well as those who upload and archive it. Then look at the processes that enable this. Benchmark against standards to see if the current system is ‘good enough’ for purpose. Define the scope of your audit appropriately. The audit is not a deliverable, though vital business information may emerge; it is to help you see priorities, perhaps through gap analysis. Then create a requirements matrix, which helps clarify what is top priority and what is not.

From this, produce a roadmap for change, and at each step of the way keep the business on side. A document signed off by a Steering Committee is valuable to ensure the priorities are acknowledged by all!

The discussion that followed considered the work in relation to staff concerns; for example, people might be scared at the thought of change, or worried about their jobs. It was great to have such experienced speakers to address the concerns that were raised. The meeting ended with Kate demonstrating some of the positive outcomes that could be achieved for organisations. There is huge potential for saving money and improving public-facing content.

This is taken from a report by Conrad Taylor.  To see the full report, follow this link: Content Strategy

Blog for March 2019 Seminar: Open Data

The speaker at the NetIKX seminar in March 2019 was David Penfold, a veteran of the world of electronic publishing who also participates in ISO committees on standards for graphics technology.  He has been a lecturer at the University of the Arts London and currently teaches Information Management in a publishing context.

David’s talk looked at the two aspects of Open Data.  The most important thing for us to recognise is Data as the foundation and validation of Information.  He gave a series of interesting historical examples and pointed out that closer to the present day, quantum theory, relativity and much besides all developed because the data that people were measuring did not fit the predictions that earlier theoretical frameworks suggested.  A principle of experimental science is that if the data from your experiments don’t fit the predictions of your theories, it is the theories which must be revisited and reformulated.

David talked about some classificatory approaches. He mentioned the idea of a triple, where you have an entity, plus a concept of property, plus a value. This three-element method of defining things is essential to the implementation of Linked Data. Unless you can establish relationships between data elements, they remain meaningless, just bare words or numbers. A number of methods have been used to associate data elements with each other and with meaning. The Relational Database model is one; spreadsheets are based on another; and the Standard Generalised Markup Language (and subsequently XML) was an approach to giving structure to textual materials. Finally, the Semantic Web and the Resource Description Framework have developed over the last two decades.
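
To see what a triple looks like in practice, here is a short sketch using Python’s rdflib library (my choice for illustration, not a tool David mentioned); the person and the properties are invented for the example.

```python
# Sketch of the entity-property-value triple model, using rdflib.
# The resources and the 'memberOf' property are invented for illustration.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")
g = Graph()

person = URIRef("http://example.org/people/alice")

# Each statement is one triple: (entity, property, value).
g.add((person, RDF.type, FOAF.Person))
g.add((person, FOAF.name, Literal("Alice Example")))
g.add((person, EX.memberOf, URIRef("http://example.org/orgs/parish-council")))

# Serialising as Turtle shows how the triples link entities to each other.
print(g.serialize(format="turtle"))
```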

David then moved on to what it means for data to be Open. There are various misconceptions around this – it does not mean Open Access, a term used within the worlds of librarianship and publishing to mean free-of-charge access, mainly to academic journals and books. We are also not talking about Open Archiving, which has a close relationship to the Open Access concept; much of the effort in Open Archiving goes into developing standardised metadata so that archives can be shared. Open data is freely available. It is often from government, but could be from other bodies and networks and even private companies.

We then watched a short piece of video from 2012 of Sir Nigel Shadbolt, a founder of the Open Data Institute, which set up the open data portals for the UK government. He explains how government publication of open data, in the interests of transparency, is now found in many countries and at national, regional and local level. The benefits include improved accountability, better public services, improvement in public participation, improved efficiency, creation of social value and innovation value to companies.

We heard about examples of Open Data: for example, Network Rail publishes open data and benefits through improvements in customer satisfaction. It says that its open data generates technology-related jobs around the rail sector and saves costs in information provision when third parties invest in building information apps based on that data. The data is used by commercial users, but also by the rail industry and Network Rail itself. The data can also be accessed by individuals and academia.

Ordnance Survey open data is important within the economy and in governance. David uses one application in his role as Chair of the Parish Council in his local village: the data allows them to see Historic England data for their area, and Environment Agency information showing Sites of Special Scientific Interest and Areas of Outstanding Natural Beauty.

After the tea-break, David showed three clips from a video of a presentation by Tim Berners-Lee. David then explained how the Semantic Web works. It is based on four concepts: a) metadata; b) structural relationships; c) tagging; d) the Resource Description Framework method of coding, which in turn is based on XML.

The Open Data Institute has developed an ‘ethics canvas’, which we looked at to decide what we thought about it.  It gives a list of fifteen issues which may be of ethical concern.  We discussed this in our table groups and this was followed by a general discussion.  There were plenty of examples raised from our collective experience, which made for a lively end to the seminar.

This is taken from a report by Conrad Taylor

To see the full report follow this link: Conradiator : NetIKX meeting report : Open Data

Blog for January 2019: Wikipedia & knowledge sharing

In January 2019, NetIKX held a seminar on the topic of Wikipedia and other knowledge-sharing experiences. Andy Mabbett gave a talk about one of the largest global projects in knowledge gathering in the public sphere: Wikipedia and its sister projects. Andy is an experienced editor of Wikipedia, with more than a million edits to his name. He worked in website management and always kept his eyes open for new developments on the Web. When he heard about the Wikipedia project, founded in 2001, he searched there for information about his local nature reserves (he is a keen bird-watcher). There was nothing to be found, and this inspired him to add his first few entries. He has been a volunteer since 2003 and makes a modest living with part of his income stream coming from training and helping others become Wikipedia contributors too. The volunteers are expected to write publicly accessible material, not create new information. The sources can be as diverse and scattered as necessary, but Wikipedia pulls that information together coherently and gives links back to the sources.

The Wikimedia Foundation, which hosts Wikipedia, says: ‘imagine a world in which every single human being can freely share in the sum of all knowledge. That is our commitment.’

Wikipedia is the free encyclopaedia that anybody can edit.  It is built by a community of volunteers contributing bit by bit over time.  The content is freely licensed for anybody to re-use, under a ‘creative commons attribution share-alike’ licence.  You can take Wikipedia content and use it on your own website, even in commercial publications and all you have to do in return is to say where you got it from.  The copyright in the content remains the intellectual property of the people who have written it.

The Wikimedia Foundation is the organisation which hosts Wikipedia. They keep the servers and the software running. The Foundation does not manage the content. It occasionally gets involved over legal issues – for example, child protection – but otherwise it doesn’t set editorial policy or get involved in editorial conflicts. That is the domain of the community.

Guidelines and principles.

Wikipedia operates according to a number of principles called the ‘five pillars’.

  • It is an encyclopaedia which means that there are things that it isn’t: it’s not a soap box, nor a random collection of trivia, nor a directory.
  • It’s written from a neutral point of view, striving to reflect what the rest of the world says about something.
  • As explained, everything is published under a Creative Commons open license.
  • There is a strong ethic that contributors should treat each other with respect and civility. That is the aim, although Wikipedia isn’t a welcoming space for female contributors and women’s issues are not as well addressed as they should be.  There are collective efforts to tackle the imbalance.
  • Lastly there is a rule that there are no firm rules! Whatever rule or norm there is on Wikipedia, you can break it if there is a good reason to do so.  This does give rise to some interesting discussions about how much weight should be given to precedent and established practice or whether people should be allowed to go ahead and do new and innovative things.

In Wikipedia, all contributors are theoretically equal and hold each other to account. There is no editorial board, and there are no senior editors who carry a right of overrule or veto. ‘That doesn’t quite work in theory,’ says Andy, ‘but like the flight of the bumblebee, it works in practice.’ For example, in September 2018, newspapers ran a story that the Tate Gallery had decided to stop writing biographies of artists for their website and would use copies of Wikipedia articles instead. The BBC does the same with biographies of musicians and bands on their website, and also with articles about species of animals. The confidence of these institutions comes because it is recognised that Wikipedians are good at fact-checking, and that if errors are spotted, or assertions are made without a supporting reliable reference, they get flagged up. But there are some unintended consequences too. Because dedicated Wikipedians have the habit of checking articles for errors and deficits, Wikipedia can be a very unfriendly place for new and inexperienced editors. A new article can get critical ‘flags’ to show something needs further attention. People can get quite zealous about fighting conflicts of interest, bias or pseudo-science.

For most people there is just one Wikipedia.  But there are nearly 300 Wikipedias in different languages.  Several have over a million articles, some only a few thousand. Some are written in a language threatened with extinction and they constitute the only place where a community of people is creating a website in that language, to help preserve it as much as to preserve the knowledge.

Wikipedia also has a number of ‘sister projects’.  These include:

  • Wiktionary is a multi-lingual dictionary and thesaurus.
  • Wikivoyage is a travel guide.
  • Wikiversity has a number of learning modules, so you can teach yourself something.
  • Wikiquote is a compendium of notable and humorous quotations.

Probably the most important of the sister projects, in terms of the impact it is having and its rate of expansion, is Wikidata. Many Wikipedia articles have an ‘infobox’ on the right side. These information boxes are machine-readable, as they have a microformat mark-up behind the scenes. From this came the idea of gathering all this information centrally. This makes it easier to share across different versions of Wikipedia, and it means all the Wikipedias can be updated together – for example, if someone well known dies. Under the open licence, the data can be used by any other project in the world. Using the Wikidata identifiers for millions of things can help your system become more interoperable with others. As a result, there is a huge asset of data, including data taken from other bodies (for example, English Heritage or chemistry databases).
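
As a taste of how that data can be reused, here is a small Python sketch that queries Wikidata’s public SPARQL endpoint (https://query.wikidata.org/sparql); the particular query – the occupations recorded for item Q42, Douglas Adams – is just an example.

```python
# Sketch: pulling machine-readable facts out of Wikidata over SPARQL.
# Q42 is the Wikidata identifier for Douglas Adams; P106 is 'occupation'.
import requests

QUERY = """
SELECT ?occupationLabel WHERE {
  wd:Q42 wdt:P106 ?occupation .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "netikx-blog-example/0.1"},  # be polite to the API
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(row["occupationLabel"]["value"])  # e.g. 'novelist', 'screenwriter'
```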

Wikipedia has many more such projects, which Andy explained to us, and the information was a revelation to most of us. We were then delighted to spend some time on an exercise in small groups. This featured two extra speakers who talked about the way they had used a shared Content Management System to gather and share knowledge, and who circulated round the groups to help the discussions. The format was different from NetIKX’s usual breakout groups, but feedback from participants was very positive.

This blog is based on a report by Conrad Taylor.

To see the full report you can follow this link: Conradiator : NetIKX meeting report : Wikipedia & knowledge sharing

Blog for the November 2018 seminar: Networks

The rise of on-line social network platforms such as Facebook has made the general population more network-aware. Yet, at the same time, this obscures the many other ways in which network concepts and analysis can be of use. Network Science was billed as the topic for the November 2018 NetIKX seminar, and in hopes that we would explore the topic widely, I did some preliminary reading.

I find that Network Science is perhaps not so much a discipline in its own right, as an approach with application in many fields – analysis of natural and engineered geography, transport and communication, trade and manufacture, even dynamic systems in chemistry and biology. In essence, the approach models ‘distinct elements or actors represented by nodes (or vertices) and the connections between [them] as links (or edges)’ (Wikipedia), and has strong links to a branch of mathematics called Graph Theory, building on work by Euler in the 18th century.
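
By way of a concrete footnote to that definition, the node-and-edge model is easy to experiment with; the sketch below uses Python’s networkx library (my own choice of tool, not one from the seminar) to build and query a toy network.

```python
# A toy network: nodes are actors (here, ports), edges are connections
# (shipping routes). Built with the networkx graph library.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("London", "Rotterdam"),
    ("Rotterdam", "Hamburg"),
    ("Hamburg", "Gdansk"),
    ("London", "New York"),
])

print(dict(G.degree()))  # how many connections each node has
print(nx.shortest_path(G, "London", "Gdansk"))  # route between two nodes
```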

In 2005, the US National Academy of Sciences was commissioned by the US Army to prepare a general report on the status of Network Science and its possible application to future war-fighting and security preparedness: the promise was that, if the approach looked valuable, the Army would put money into getting universities to study the field. The NAS report is available publicly at http://nap.edu/11516 and is worth a read. It groups the fields of application broadly into three: (a) geophysical and biological networks (e.g. river systems, food webs); (b) engineered networks (roads, electricity grid, the Internet); and (c) social networks and institutions.

I’ve prepared a one-page summary, ‘Network Science: some instances of networks and fields of complex dynamic interaction’, which also lists some further study resources, five books and an online movie. (Contact NetIKX if you want to see this). In that I also note: ‘We cannot consider the various types of network… to be independent of each other. Amazon relies on people ordering via the Internet, which relies on a telecomms network, and electronic financial transaction processing, all of which relies on the provision of electricity; their transport and delivery of goods relies on logistics services, therefore roads, marine cargo networks, ports, etc.’

The NetIKX seminar fell neatly into two halves. The first speaker, Professor Yasmin Merali of Hull University Business School, offered us a high-level theoretical view and the applications she laid emphasis on were those critical to business success and adaptation, and cybersecurity. Drew Mackie then provided a tighter focus on how social network research and ‘mapping’ can help to mobilise local community resources for social welfare provision.

Drew’s contribution was in some measure a reprise of the seminar he gave with David Wilcox in July 2016. Another NetIKX seminar which examined the related topics of graph databases and linked data graphs is that given by Dion Lindsay and Dave Clarke in January 2018.

Yasmin Merali noted that five years ago there wasn’t much talk about systems, but now it is commonplace for problems to be identified as ‘systemic’. Yet, ironically, Systems Thinking used to be very hot in the 1990s, later displaced by a fascination with computing technologies. Now once again we realise that we live in a very complex and increasingly unpredictable world of interactions at many levels; where the macro level has properties and behaviours that emerge from what happens at the micro level, without being consciously planned for or even anticipated. We need new analytical frameworks.

Our world is a Complex Adaptive System (CAS). It’s complex because of its many interconnected components, which influence and constrain and feed back upon each other. It is not deterministic like a machine, but more like a biological or ecological system. Complex Adaptive Systems are both stable (persistent) and malleable, with an ability to transform themselves in response to environmental pressures and stimuli – that is the ‘adaptive’ bit.

We have become highly attuned to the idea of networks through exposure to social media; the ideas of ‘gatekeepers’, popularity and influence in such a network are quite easy to understand. But this is selling short the potential of network analysis.

In successful, resilient systems, you will find a lot of diversity: many kinds of entity exist and interact within them. The links between entities in such systems are equally diverse. Links may persist, but they are not there for ever, nor is their nature static. This means the network can be ‘re-wired’, which makes adaptation easier.

Amazing non-linear effects can emerge from network organisation, and you can exploit this in two ways. If adverse phenomena are encountered, the network can implement a corrective feedback response very quickly (for example, to isolate part of the network, which is the correct public health response in the case of an epidemic). Or, if that reaction isn’t going to have the desired effect, we can try to re-wire the network, dampening some feedback loops, reinforcing others, and thus strengthening those ‘constellations’ of links which can best rise to the situation.

Information flows in the network. Yasmin offered us an analogy: the road network system and, distinct from that, the traffic running across that network. People writing about the power of social media have been concentrating on the network structure (the nodes and the links), but not so much on the factors which enable or inhibit different kinds of dynamic within that structure.

Networks can enable efficient utilisation of distributed resources. We can also see networks as the locus where options are generated. Each change in a network brings about new conditions. But the generative capacity does come at a cost: you must allow sufficient diversity. Even if there are elements which don’t seem useful right now, there is a value in having redundant components: that’s how you get resilience.

You might extend network thinking outwards, beyond networking within one organisation, towards a number of organisations co-operating or competing with each other. Some of your potential partners can do better in the current system and with their resources than you; in another set of circumstances, it might be you who can do better. If we can co-operate, each tackling the risks we are best able to cope with, we can spread the overall risk and increase the capability pool.

Yasmin referred to the idea of ‘Six Degrees of Separation’ – that through intermediate connections, each of us is just six link-steps away from anybody else. The idea was important in the development of social network theory, but it turns out to have severe limitations, because where links are very tenuous, the degree of access or influence they imply can be illusory. That’s why simplistic social network graphs can be deceptive.

In a regular ‘small worlds’ network, everyone is connected to the same number of people in some organised way, and even one extra random link shortens the path length. It’s possible to ‘re-wire’ a network to get more of these small-world effects, with the benefit of making very quick transitions possible.
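
This re-wiring effect is straightforward to demonstrate in code. The sketch below (using networkx’s Watts–Strogatz generator, my own choice of illustration) starts from a regular ring lattice and shows the average path length collapsing as a small fraction of links is randomly re-wired.

```python
# Small-world effect: re-wiring a few links in a regular ring lattice
# sharply shortens the average path length across the network.
import networkx as nx

n, k = 200, 4  # 200 nodes, each initially linked to its 4 nearest neighbours
for p in (0.0, 0.01, 0.1):  # fraction of edges randomly re-wired
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    avg = nx.average_shortest_path_length(G)
    print(f"re-wiring p={p}: average path length {avg:.1f}")
```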

But there is another kind of network, similar in structure to the Internet and most of the biological systems we might consider – and that’s what we can call the ‘scale-free’ network. In this case, there is no cut-off limit to how large, or how well-connected a node can be.

Networks are also ‘lumpy’ – in large networks, there are very large hubs, but also adjacent less-prominent hubs, which in an Internet scenario are less likely to be attacked or degraded. This gives some hope that the system as a whole is less likely to be brought to its knees by a random attack; but a well-targeted attack against the larger hubs can indeed inflict a great deal of damage. This is something that concerns security-minded designers of networks for business. It is strategically imperative to have good intelligence about what is going on in a networked system – what are the entities, which of them are connected, and what is the nature of those connections and the information flows between them.
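
The contrast between random failure and a targeted attack on the big hubs can also be simulated on a toy scale-free network; the sketch below (networkx again, with made-up sizes) illustrates why the well-targeted attack is the one to worry about.

```python
# Random failure versus targeted attack on a scale-free network:
# removing the best-connected hubs fragments the network far faster.
import random
import networkx as nx

G = nx.barabasi_albert_graph(500, 2, seed=1)  # toy scale-free network
random.seed(1)

def largest_component(graph, removed):
    """Size of the largest connected component after removing some nodes."""
    g = graph.copy()
    g.remove_nodes_from(removed)
    return len(max(nx.connected_components(g), key=len))

hubs = sorted(G.nodes, key=G.degree, reverse=True)[:25]  # top 5% by degree
unlucky = random.sample(list(G.nodes), 25)               # random 5%

print("largest component after random failures:", largest_component(G, unlucky))
print("largest component after targeted attack:", largest_component(G, hubs))
```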

It’s important to distinguish between resilience and robustness. Resilience often comes from having network resources in place which may be redundant, and may appear superfluous or of marginal value, but which provide a broader option space and a better ability to adapt to changing circumstances.

Looking more specifically at social networks, Yasmin referred to the ‘birds of a feather flock together’ principle, where people are clustered and linked based on similar values, aspirations, interests, ways of thinking etc. Networks like this are often efficient and fast to react, and much networking in business operates along those lines. However, within such a network, you are unlikely to encounter new, possibly valuable alternative knowledge and ways of thinking.

Heterogeneous linkages may be weaker, but knowledge propagating along such weak links is valuable for expanding the knowledge pool. Expanded linkages may operate along the ‘six degrees’ principle, and through intermediate friends-of-friends, who serve both as transmitters and as filters. And yet a trend has been observed for social network engines (such as Facebook) to create a superdominance of ‘birds of a feather’ types of linkages, leading to confirmation bias and even polarisation.

In traditional ‘embodied’ social networks, people bonded and transacted with others whom they knew in relatively persistent ways, and could assess through an extended series of interactions in a broadly understandable context. In the modern cybersocial network, this is more difficult to re-create, because interactions occur through ‘shallow’ forms such as text and image – information is the main currency – and often between people who do not really know each other.

Another problem is the increased speed of information transfer, and decreased threshold of time for critical thought. Decent journalism has been one of the casualties. Yes, ‘citizen journalism’ via tweet or online video post can provide useful information – such informants can often go where the traditional correspondent could not – but verification becomes problematic, as does getting the broader picture, when competition between news channels to be first with the breaking story ‘trumps’ accuracy and broader context.

If we think of cybersocial networks as information networks, carrying information and meaning, things become interesting. Complexity comes not just from the arrangement of links and nodes, but also from the multiple versions of information, and whether a ‘message’ means the same to each person who receives it: there may be multiple frameworks of representation and understanding standing between you and the origin of the information.

This has ethical implications. Some people say that the Internet has pushed us into a new space. Yasmin argues that many of the issues are those we had before, only now more intensely. If we think about the ‘gig economy’, where labour value is extracted but workers have scant rights – or if we think about the ownership of data and the rights to use it, or surveillance culture – these issues have always been around. True, those problems are now being magnified, but maybe that cloud has a silver lining in forcing legislators to start thinking about how to control matters. Or is it the case that the new technologies of interaction have embedded themselves at such a fundamental level that we cannot shift them?

What worries Yasmin more are issues around Big Data. As we store increasingly large, increasingly granular data about people from sources such as Fitbits, GPS trackers, Internet-of-Things devices, online searches… we may have more data, but are we better informed? Connectivity is said to be communication, but do we understand what is being said? The complexity of the data brings new challenges for ethics – often, you don’t know where it comes from, what the quality of the instrumentation was, and how to interpret the data sets.

And then there is artificial intelligence. The early dream was that AI would augment human capability, not displace it. In practice, it looks as if AI applications do have the potential to obliterate human agency. Historically, our frameworks for how to be in the world, how to understand it, were derived from our physical and social environment. Because our direct access to the physical world and the raw data derived from it is compromised, replaced by other people’s representation of other people’s possible worlds, we need to figure out whose ‘news’ we can trust.

When we act in response to the aggregated views of others, and messages filtered through the media, we can end up reinforcing those messages. Yasmin gave as an example rumours of the imminent collapse of a bank causing a ‘bank run’ which actually does cause the bank’s collapse (in the UK, an example was the September 2007 run on Northern Rock). She also recounted examples of the American broadcast media’s spin on world events, such as the beginning of the war in Iraq, and 9/11. People chose to tune in to those media outlets whose view of the world they preferred. (‘Oh honey, why do you watch those channels? It’s so much nicer on Fox News.’)

There is so much data available out there, that a media channel can easily find provable facts and package them together to support its own interpretation of the world. This process of ‘cementation’ of the silos makes dialogue between opposed camps increasingly difficult – a discontinuity of contemporaneous worlds. This raises questions about the way our contextual filtering is evolving in the era of the cybersocial. And if we lose our ‘contextual compass’, interpreting the world becomes more problematic.

In Artificial Intelligence, there are embedded rules. How does this affect human agency in making judgements? One may try to inject some serendipity into the process – but serendipity, said Yasmin, is not that serendipitous.

Yasmin left us with some questions. Who controls the network, and who controls the message? Should we be sitting back, or are there ethical considerations that mean we should be actively worrying about these things and doing what we can? What is it ethical not to have known, when things go wrong?


Drew Mackie prepares network maps for organisations; most of the examples he would give are from the London area. He declared that he would not be talking about network theory, although much of it is implicit in, and underlies, what he would address.

Mostly, Drew and his associates work with community groups. What they seek to ‘map’ are locally available resources, which may themselves be community groups or agencies. In this context, one way to find out ‘where stuff is’ is to consult some kind of catalogue, such as those which local authorities prepare; and a location map will show you where things are. But a network map tries to discover and depict who collaborates with whom, across a whole range of agencies, community groups and key individuals.

When an organisation commissions a network map from Drew, they generally have a clear idea of what they want to do with it. They may want to know the patterns of collaboration, what assets are shared, and who the key influencers are, typically because they want to use that information to influence policy, or to form projects or programmes in that area.

Drew explained that the kinds of network map he would be talking about are more than just visual representations that can be analysed according to various metrics. They are also a kind of database: they hold huge amounts of data in the nodes and connections, about how people collaborate, what assets they hold, etc. So really, what we create is a combination of a database and a network map, and as he would demonstrate, software can help us maintain both aspects.
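
To make that idea concrete, here is a minimal sketch (not Drew’s actual data model) of a network map doubling as a database, using Python’s networkx library. All of the names, assets and relationships are invented for illustration.

```python
# A network map that is also a database: nodes and connections
# carry arbitrary descriptive attributes. All data here is invented.
import networkx as nx

G = nx.Graph()

# Each node is an organisation or person, with data attached.
G.add_node("Anytown Youth Trust", kind="community group",
           assets=["minibus", "meeting hall"])
G.add_node("Anytown Council", kind="local authority",
           assets=["funding", "premises"])

# Each connection records how the two parties collaborate.
G.add_edge("Anytown Youth Trust", "Anytown Council",
           relationship="grant funding", frequency="monthly")

# The same structure can be queried like a database...
for node, data in G.nodes(data=True):
    print(node, data)

# ...or analysed with network metrics, e.g. degree centrality.
print(nx.degree_centrality(G))
```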

If you want to build such a network map, it is essential to appoint a Map Manager to control it, update it, and also promote it. Unless you generate and maintain that awareness, in six months the map will be dead: people won’t understand it, or why it was created.

Residents in the area may be the beneficiaries, but we don’t expect them to interact with the map to any great extent. The main users will be one step up. To collect the information that goes into building the map, and to encourage people to support the project, you need people who act as community builders; Drew and his colleagues put quite a lot of effort into training such people.

To do this, they use two pieces of online software: sumApp, and Kumu. SumApp is the data collection program, into which you feed data from various sources, and it automatically builds you a network map through the agency of Kumu, the network visualisation and analytics tool. Data can be exported from either of these.
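
As a rough illustration of that hand-off – hedged, since sumApp’s actual export format is not described here – the sketch below serialises a handful of connections to a CSV connection list of the kind that tools such as Kumu can import as a spreadsheet with ‘From’ and ‘To’ columns. The field names and data are assumed.

```python
# Serialise connection data to a CSV that a network visualisation
# tool could ingest. Field names and contents are assumptions for
# illustration, not sumApp's real export format.
import csv

connections = [
    {"From": "Anytown Youth Trust", "To": "Anytown Council",
     "Type": "funding"},
    {"From": "Anytown Youth Trust", "To": "St Mary's Hall",
     "Type": "venue sharing"},
]

with open("connections.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["From", "To", "Type"])
    writer.writeheader()
    writer.writerows(connections)
```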

When people contribute their data to such a system, what they see online is the sumApp front end; they contribute data, then they get to see the generated network map. No-one has to do any drawing. SumApp can be left open as a permanent portal to the network map, so people can keep updating their data; and that’s important, because otherwise keeping a network map up to date is a nightmare (and probably won’t happen, if it’s left to an individual to do).

The information entered can be tagged with a date, and this allows a form of visualisation that shows how the network changes over time.
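
A hedged sketch of how that works in principle: if every connection carries the date on which it was reported, the network as it stood at any moment can be reconstructed by filtering on that date. The data and helper function below are invented for illustration.

```python
# Reconstruct a network 'as of' a given date by filtering
# date-tagged connections. All data here is invented.
import networkx as nx
from datetime import date

G = nx.Graph()
G.add_edge("Amina", "Ben", reported=date(2018, 3, 1))
G.add_edge("Ben", "Chloe", reported=date(2018, 9, 15))
G.add_edge("Amina", "Chloe", reported=date(2019, 2, 20))

def snapshot(graph, as_of):
    """Return the subgraph of connections reported on or before a date."""
    keep = [(u, v) for u, v, d in graph.edges(data=True)
            if d["reported"] <= as_of]
    return graph.edge_subgraph(keep)

# The network as it looked in mid-2018: only the first edge exists.
print(snapshot(G, date(2018, 6, 1)).edges())
```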

Drew then showed us how sumApp works, first demonstrating the management ‘dashboard’ through which we can monitor who the participants are, the number of emails sent, connections made and received, etc. So that we can experience that ourselves should we wish, Drew said he would see about inviting everyone present to join the demonstration map.

Data is gathered through a survey form, which can be customised to the project’s purpose. To gather information about a participant’s connections, sumApp presents an array of ‘cards’, which you can scroll through or search, to identify those with whom you have a connection; and if you make a selection, a pop-up box asks how frequently you interact with that person – in general, that correlates well with how closely you collaborate – and you can add a little story about why you connect. Generally that is in words, but sound and video clips can also be added.
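
The sketch below illustrates that general idea – turning interaction-frequency answers into weighted connections, with the accompanying ‘story’ stored on the edge. The frequency-to-weight scale and all names are assumptions, not sumApp’s actual data model.

```python
# Turn survey answers into weighted, annotated connections.
# The weight scale and names are assumed for illustration.
import networkx as nx

# Map frequency answers onto numeric weights (an assumed scale).
FREQUENCY_WEIGHT = {"daily": 4, "weekly": 3, "monthly": 2, "rarely": 1}

responses = [
    {"from": "Dana", "to": "Eli", "frequency": "weekly",
     "story": "We co-run a reading group."},
    {"from": "Dana", "to": "Farid", "frequency": "rarely",
     "story": "Met at a council open day."},
]

G = nx.Graph()
for r in responses:
    G.add_edge(r["from"], r["to"],
               weight=FREQUENCY_WEIGHT[r["frequency"]],
               story=r["story"])

# Heavier edges indicate closer collaboration.
for u, v, d in G.edges(data=True):
    print(u, "-", v, d["weight"], "|", d["story"])
```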

Having got ‘data input’ out of the way, Drew showed us how the map can be explored. You can see a complete list of all the members of the map. If you were to view the whole map and all its connections, you would see an indecipherable mess; but by selecting a member’s node and choosing a command, you can, for example, fade back all but the immediate (first-degree) connections of that node (he chose our member Steve Dale as an example). Or you could filter to see only those with a particular interest, or some other attribute in common.

Drew also demonstrated that you can ask to see who else is connected to a person or institution via a second degree of connection – for example, those people connected to Steve via Conrad. This is a useful tool for organisations seeking to understand the whole mesh of organisations and other contacts round about them. Those who are keenest to use this are not policy people or managers, but people with one foot in the community and the other in a management role: children’s centre managers, or youth team leaders – people delivering a service locally, but who want to understand the broader ecology…
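
In networkx terms – as an approximation of what Kumu is doing, not its actual implementation – the two exploration steps just described look roughly like this. The example network is invented.

```python
# First-degree 'fade back' and second-degree exploration,
# approximated with networkx on an invented network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Steve", "Conrad"), ("Steve", "Drew"),
    ("Conrad", "Yasmin"), ("Drew", "Mandy"), ("Yasmin", "Noor"),
])

# First-degree view: Steve plus his direct connections.
ego = nx.ego_graph(G, "Steve", radius=1)
print(sorted(ego.nodes()))          # ['Conrad', 'Drew', 'Steve']

# Second-degree view: who is reachable via one intermediary,
# e.g. people connected to Steve via Conrad.
distances = nx.single_source_shortest_path_length(G, "Steve", cutoff=2)
second_degree = [n for n, d in distances.items() if d == 2]
print(sorted(second_degree))        # ['Mandy', 'Yasmin']
```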

Kumu is easy to use: Drew and colleagues have held training sessions covering only the broad principles, after which participants have gone home and, that same night, drawn their own Kumu map in a couple of hours – not untypically including about 80 different organisations.

Drew also demonstrated a network map created for the Centre for Ageing Better (CFAB). With the help of Ipsos MORI, they had produced six ‘personas’ which could represent different kinds of older people. One purpose of that project was to see how support services might be better co-ordinated to help people as they get older. Because Drew also talked through this in the July 2016 NetIKX meeting, I shall not cover it again here.

Drew also showed an example created in Graph Commons (https://graphcommons.com/). This network visualisation software has a nice feature that lets you get a rapid overview of a map in terms of its clusters, highlighting the person or organisation who is most central within that cluster, aggregating clusters for viewing purposes into a single higher-level node, and letting you explore the links between the clusters. The developers of sumApp are planning a forthcoming feature that will let sumApp work with Graph Commons as an alternative graph engine to Kumu.
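
The sketch below approximates that cluster-overview idea with networkx’s community-detection tools. It is not the Graph Commons algorithm, just the same general pattern – find the clusters, then report the most central member of each – run on an invented graph.

```python
# Approximate a 'cluster overview': detect communities, then
# identify the most central member of each. Graph is invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("A", "C"),   # one cluster
    ("D", "E"), ("E", "F"), ("D", "F"),   # another cluster
    ("C", "D"),                           # a link between them
])

centrality = nx.degree_centrality(G)
for i, community in enumerate(greedy_modularity_communities(G), 1):
    hub = max(community, key=centrality.get)
    print(f"Cluster {i}: {sorted(community)}, most central: {hub}")
```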

In closing, Drew suggested that as a table-group exercise we should discuss ideas for how these insights, techniques and tools might be useful in our own work situations; note these on a sheet of flip-chart paper; and then we could later compare the outputs across tables.

Conrad Taylor