Selling Taxonomies to organisations, Thursday July 3 2014

Blog for NetIKX  July 3rd 2014  Whatever happened to Margate?

The NetIKX meeting this month was highly popular. I thought a session on taxonomy might be considered dull, but I guess the hook was in the title: ‘making the business case for taxonomy’. The session did provide great ideas for making a business case for an organisational taxonomy project, and the ideas transfer well to other contexts where direct quantifiable benefit will not be an output of the project, so the immediate impact on ROI is not a simple computation.

There were two case studies presented. The first, from ‘Catalogue Queen’ Alice Laird (ICAEW), faced the business-case quandary head on. How did they get hard-headed finance to budget for their taxonomy plans? The winning move was to demonstrate the value of the work at small scale. People in the business realised that the library micro-site was the best place to find things and asked why this was so. The knowledge management team were then able to demonstrate how the taxonomy could increase organisational efficiency, helping to prove the case for all website users.

This case study also provided tips for running a taxonomy project. They used a working group drawn from across the organisation, but kept the team small to ensure each person involved was clear about the relevance of the project to them and their team. They also made the project stages clear: a consultation stage might show where there were contradictions and confusion, followed by a stage where the people with appropriate expertise would step in to make firm decisions. By setting out the stages clearly, they avoided protracted discussion and made good use of the skills already available within their team. In this way they fully exploited their assets! All in all, it was good to hear a crisp report about a well-organised project, and we all wish them luck with their imminent implementation.

The second case study looked at using a taxonomy to help share data between different organisations in the UK heritage sector. In a talk called ‘Reclassify the Past’, Phil Carlisle (English Heritage) entertained us, explaining a particular problem that fuelled the need for a taxonomy project. At one point, although the classification system worked well in most respects, some vital geographic data was not included. As a result, a search on, for example, Margate came up blank, even though the data was in there. The danger was reputation loss – particularly with people living in Margate! Highlighting this type of blip was another useful way to sell a structured taxonomy project. Search, even with a good search engine, is more complex than many people realise, and poorly organised metadata can cause problems that ‘Google it!’ may not solve.

This case study also provided an interesting operational tip.  In order to create the best platform for sharing, this team gave away the software they were using to others in the field, as the cost was outweighed by the overall benefit of standardisation.

The session ended with a lively set of discussions. I was with a group trying to identify more closely how a taxonomy should be classified: animal, vegetable or mineral? We found some paradoxes to play with. For example, does a taxonomy work as a device to structure data, or is a structure already in place the basis for the taxonomy?

To conclude, ironically, one of the speakers commented jokingly, ‘there’s no gratitude!’ Fair comment, as basic information infrastructure projects do not usually attract riveted attention. But at this meeting at least, where taxonomies are loved and cared for and business-case tips are welcomed, the speakers could rely on full appreciation and gratitude from a very attentive audience.

Lissi Corfield (posted by robrosset)

Graham Robertson giving feedback on his group's discussions



Steve Dale summarising his group’s discussions

Information on the Move Seminar Tuesday May 13th Part 2

Max Whitby of Touch Press came to talk to around 30 people attending the NetIKX seminar at the British Dental Association in Wimpole Street, following on from David Nicholas (see related blog, Part 1). Max’s company specialises in creating apps which are interactive and provide information or assist in education. In other words, these apps have a point; they are not games. They have created apps of ‘The Periodic Table’, ‘The Solar System’ and ‘The Orchestra’. Users spend hours looking, listening and reading the annotation on these apps. For example, on the app for T.S. Eliot’s great poem ‘The Waste Land’, there are multiple readers including Fiona Shaw, Alec Guinness and T.S. Eliot himself. Three of their music apps have been nominated for an award from the Royal Philharmonic Society. Max displayed a couple of the apps on screen – one in particular caught my attention – ‘The Orchestra’. This features the instruments (looking at each instrument from every angle); the music (including the score); the conductor. Amazing.

Following on from Max’s talk we had refreshments and then divided into two syndicate groups, which addressed two different issues. Syndicate 1 considered: “Taking an example of the rich functionality and content of the Touch Press app, think of an app that your organisation could develop that would engage and/or educate and/or inform its users/customers”. They came up with five ideas:

  • Members from the Ministry of Justice suggested an information app for internal use within the Ministry. It could identify all the things that policy makers need to know (to connect with) in order to produce proper policy. The current tools are paper documents, documents held by records management or information controlled by external contractors; the app would package up such tools and present them in a uniform but innovative way.
  • Members from the Institute of Energy suggested an educational app. Their current website has an interactive matrix demonstrating “The Energy Chain”, linked to a massive offsite database held in a separate location. An app could use one part of the database to describe “The Energy Landscape” (a mixture of visuals, text and statistics), and could be used by anyone: researchers, students, members of the public.
  • Attendees from the Medical Defence Union came up with an app about things to avoid, in terms of risk mitigation for medical professionals.
  • An attendee from the Department of Health suggested two apps: one about how the body functions, with different levels of knowledge so it can be used by both health professionals and members of the public; the other to address IT support, covering everything to do with service management, from issues with suppliers to logging all support calls in one place.

It was believed that such apps would offer a richer experience than textbooks or documents.

Syndicate 2 dealt with the question “What is the role of the information professional in a disintermediated, information-rich world?” They came up with the idea that today’s information professionals should go out into the marketplace. Information professionals are competing with IT people who have no background or skills in information management. Discussion centred on trust, and on embracing traditional skills of quality assurance and quality control so that information can be trusted. Such an approach calls for advocates who are highly relevant to the organisation in question. Librarians were once embedded in certain sectors (like the pharmaceutical industry) but not today. This syndicate’s focus was on disintermediation rather than ‘information on the go’.

Steve Dale wrapped up the syndicate sessions by stating that there is always a need to evaluate the information we receive – we can’t rely on algorithms, which can be degraded. The syndicate sessions ended and the attendees enjoyed a glass of wine (or two) and nibbles. It was a most successful seminar. Our thanks to the NetIKX ManCom for organising the event, and in particular to Suzanne Burge, Melanie Harris, Anoja Fernando and Steve Dale for running it on the day.

rob rosset

Information on the Move – Seminar held on Tuesday May 13th 2014 – Part 1

David Nicholas came to talk to a group of around 30 NetIKX members at the HQ of the British Dental Association in Wimpole Street. David runs CIBER, a pan-European research outfit. He spoke about ‘the second digital transition’, arguing that there will be no librarians (as we know them) by 2022. ‘The first digital revolution’ brought librarianship to its knees; this one will finish it off. It is ‘the end of culture as we know it’. The first digital revolution took place in the office or in the library: the device – the PC – was desk-bound, office-bound. The second digital revolution is taking place in the street. Mobile is now the main platform for accessing the web, and mobile means meeting information needs at the time of need. Mobiles provide access to masses of information for everyone. Smartphones and social media straddle major information worlds, informal and formal. Mobiles empower digital consumer purchasing. Mobiles are fast. Mobiles are smaller devices with small screens: they are not computational devices but access devices. Mobiles are social, personal, cool and popular.

Here are the basic characteristics of digital information-seeking behaviour: ‘hyperactive’ – users love choice and looking; ‘bouncers’ – 1-2 pages viewed out of thousands; ‘promiscuous’ – about 40% don’t come back; ‘one slots’ – one visit, one page. Why is this? Because of search engine lists; massive and changing choice; so much rubbish out there; poor retrieval skills (2.2 words per query); multi-tasking (it is more pleasurable doing several things at once); and end-user checking – so no memories in cyberspace and a very high ‘churn rate’. The horizontal has replaced the vertical: reading is ‘out’, fast ‘media’ is in. Information-seeking-wise, users ‘skitter’ and power-browse. The consequences? Abstracts have never been so popular; scholars go online to avoid reading and prefer the visual; visits last a few minutes – 15 minutes is a long time; shorter articles have a much bigger chance of being used.

Europeana mobile use: 130,000 unique mobile users accessed Europeana in the last six months. Characteristics: visits from mobiles are ‘information light’ – much less interactive, with fewer records and searches and less time spent per visit; there are differences between devices (iPhone – abbreviated behaviour on the part of searchers; iPad – behaviour conforms to that of PC users); mobile use peaks at nights and weekends (desktops peak on Wednesdays and late afternoons); and searching and reading have moved into the social space. We could not have come further from the initial concept of libraries: no walls, no queuing, no intermediaries! Ask any young person about a library and they will point to their mobile. It is ironic that mobiles were once banned from libraries – now the mobile is the library. The mobile, borderless information environment really challenges libraries and publishers. It constitutes another massive round of disintermediation and migration. The changed platform and environment transforms information consumption. A final reflection: are the web and the mobile device making us stupid? Where are we going with information, learning and mobile devices?


Event report: From data and information to knowledge: the Web of tomorrow – a talk by Dr Serge Abiteboul

Some notes taken at the Milner Award Lecture by Dr Serge Abiteboul for the Royal Society on 12th November, From data and information to knowledge: the Web of tomorrow. Dr Abiteboul was awarded the 2013 Milner Award, given annually for outstanding achievement in computer science by a European researcher.

Serge Abiteboul

Dr Abiteboul’s research work focuses mainly on data, information and knowledge management, particularly on the Web. Like NetIKX members, he is interested in the transition from data to knowledge. Among many prestigious projects, he has worked on Apple’s Siri interface and Active XML, a declarative framework that harnesses web services for data integration.

In a charming French accent, he explained to us that he was going to talk about networks – networks of machines (Internet), of content (Web) and people (social media).

Nowadays information is everywhere, worldwide. Everything is big and getting bigger – the size of the digital world is estimated to be doubling every 18 months. A web search engine now is a cluster of machines – maybe a million machines. In the past, getting ten machines to work together was a big challenge! Engineering achievements have enabled hundreds of thousands of computers to work together.

Dr Abiteboul’s assumptions

1. The size will continue to grow
2. The information will continue to be managed by many systems (rather than a company like Facebook taking over all the world’s information).
3. These systems will be intelligent – in the sense that they produce/consume knowledge and not simply raw data.

The 4 + 1 V’s of Big Data…

Volume, Velocity, Variety, Veracity = the four difficulties of big data. There is a huge mass of data, more than can be retrieved. And it is changing fast, particularly sets of data like the stock market. Furthermore, the information on the web is uncertain, full of imprecision and contradiction. Search engines must contend with lies and opinions, not just facts.

Dr Abiteboul’s +1 is Value – the bottom line is, what value comes from all this data? How does a computer decide what is important to present?

Data analysis is a technical challenge as old as computer science. We know how to do it with a small amount of data; the next challenge is to do it with a huge amount. Complex algorithms will have to be designed. These will need to do low-level statistical analysis, because finding the perfect statistics would take too long. Maths, informatics, engineering and hardware are all needed.
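The idea of settling for ‘low-level’ statistics on data too big to scan exactly can be sketched with a classic trick: estimate from a bounded random sample rather than the full stream. This is purely illustrative (the function name and sample size are invented for this example, not anything from the lecture):

```python
import random

def approx_mean(stream, sample_size=1_000, seed=42):
    """Estimate the mean of an arbitrarily long stream using
    reservoir sampling, holding at most sample_size items in memory."""
    random.seed(seed)
    reservoir = []
    for i, x in enumerate(stream):
        if i < sample_size:
            reservoir.append(x)          # fill the reservoir first
        else:
            j = random.randrange(i + 1)  # keep each later item with prob sample_size/(i+1)
            if j < sample_size:
                reservoir[j] = x
    return sum(reservoir) / len(reservoir)

# One pass over a million values whose true mean is 500,000;
# the estimate lands close without ever storing the full data.
estimate = approx_mean(range(1_000_001))
```

The answer is approximate, but it arrives after a single bounded-memory pass – exactly the trade-off between perfect statistics and feasible ones that the lecture describes.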

But of the tree of the knowledge of good and evil, thou shalt not eat of it: for in the day that thou eatest thereof thou shalt surely die. (Genesis 2.17)

People often prefer being given one answer rather than a multitude of options to sort through. When we ask another person a question, they don’t reply by giving us twenty pages to read through, so why should we interact with machines (search engines) like that? (Note – should information professionals be very selective and choosy with the information we put forward to customers? Would they prefer a reading list of five books rather than twenty?)

Machines prefer formatted knowledge, logical statements. Machines can be programmed to find patterns – e.g. Woody Allen ‘is married to’ Soon-Yi Previn. But people write that two people are married in many different ways. How does a search engine cope with all the false statements and contradictions, e.g. ‘Elvis Presley died on 16 August 1977’ and ‘The King is alive’!
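As a rough sketch of what ‘finding patterns’ means here: a handful of hand-written textual patterns can pull the same marriage fact out of differently worded sentences. Real extraction systems use thousands of learned patterns plus statistical reconciliation of contradictions; everything below (the pattern set, the function name, the extra example sentences) is invented for illustration:

```python
import re

# A capitalised-word sequence such as "Woody Allen" or "Soon-Yi Previn".
NAME = r"[A-Z][\w-]+(?: [A-Z][\w-]+)*"

# Three of the many ways English states that two people are married.
PATTERNS = [
    re.compile(rf"(?P<a>{NAME}) is married to (?P<b>{NAME})"),
    re.compile(rf"(?P<a>{NAME}) and (?P<b>{NAME}) tied the knot"),
    re.compile(rf"(?P<b>{NAME}), wife of (?P<a>{NAME})"),
]

def extract_marriages(text):
    """Return a set of (subject, relation, object) triples found in text."""
    facts = set()
    for pattern in PATTERNS:
        for m in pattern.finditer(text):
            facts.add((m.group("a"), "marriedTo", m.group("b")))
    return facts

sentences = ("Woody Allen is married to Soon-Yi Previn. "
             "Soon-Yi Previn, wife of Woody Allen, is a former actress. "
             "Elvis Presley and Priscilla Beaulieu tied the knot in 1967.")
facts = extract_marriages(sentences)
```

Note how two differently phrased sentences collapse into the same formatted triple – which is precisely the machine-friendly ‘formatted knowledge’ the paragraph above describes, and why contradictory statements become detectable at all.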

The real problem with the accuracy of Wikipedia is not incorrect amateurs but paid professionals with their own agenda, paid by companies to take a particular viewpoint.

The difficulty is knowing when to stop searching – when just enough right answers have been found. Precision, the fraction of retrieved results that are correct, must be balanced against recall, the fraction of correct results that are actually retrieved. There is a trade-off between finding more knowledge and finding the correct knowledge. Machines will have to be programmed to separate the wheat from the chaff. Knowing the good, trustable sources is a huge advantage for this.
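The precision/recall trade-off described above is easy to make concrete with a toy search scenario: suppose 10 relevant documents exist and the engine returns 8, of which 6 are relevant (all names and numbers here are invented for the example):

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved results that are correct.
    Recall: fraction of correct results that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

relevant = {f"doc{i}" for i in range(10)}               # 10 right answers exist
retrieved = {f"doc{i}" for i in range(6)} | {"x", "y"}  # engine returns 8, 6 of them correct
p, r = precision_recall(retrieved, relevant)            # p = 0.75, r = 0.6
```

Returning more results can only hold or raise recall but usually drags precision down; the ‘when to stop’ decision is choosing a point on that curve.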


Next, Dr Abiteboul mentioned librarians! He praised the way that a librarian may suggest you read an article that transforms your research. Or you may hear by chance a song that totally obsesses you. Computers lack this serendipity – they’re square. Information professionals take heart: there is value in chance, in browsing shelves, in the ability of your brain to make suggestions computers wouldn’t.


We cannot archive all the data we produce – there’s a lack of storage resources. How do we choose what we keep? The British Library is tackling this question through its UK Web Archive project, which involves archiving 4.8 million UK websites and one billion web pages.

The BL Web Archiving page says: “We are developing a new crowd sourcing application that will use Twitter to support an automated selection process. We envisage that in the future, automated selection of this sort will complement manual selection by subject experts, resulting in a more representative and well-rounded set of collections.” So perhaps the web of the future will need both expert people and smart computing systems.

The decisions of machines

Decisions are increasingly made by machines. For instance, automated transport systems like the Docklands Light Railway, or auto trading on the stock market. How far do we go with this, asked Dr Abiteboul. Would a machine be allowed to decide that someone is a terrorist and kill them, and if so at what level of certainty? At 90% sure? At 95% sure?

Soon machines will acquire knowledge for us, remember knowledge for us, reason for us. We should get prepared by learning informatics, so that we understand them.

There were so many ideas flying about that I was unable to note them all down! Luckily the whole lecture is freely available to watch at

Blog post by Emily Heath.

Digital native or digital immigrant – does it matter?

Karen Blakeman and Graham Coult, 28 January 2013 #NetIKX59

The first seminar of NetIKX’s new 2013-2015 programme looked at the issues we all face in a technology-driven world.  It combined two of our key themes: harnessing the web for information and knowledge exchange, and developing and exploiting information and knowledge assets and resources.

Karen Blakeman – RBA Information Services
‘Born digital: time for a rethink’

As Karen reminded us, the phrase ‘digital immigrants’ can be traced back to Marc Prensky’s 2001 paper, ‘Digital Natives, Digital Immigrants’. This paper is free to download and there is also a follow-up Part 2 paper. Prensky made the argument that the US education system was no longer fit for purpose for a younger generation born with new technologies exploding around them.

Karen Blakeman speaking.


Pre-internet, many information professionals were using subscription databases with no graphical interfaces. Back then, a lot of research was done by asking people we knew or asking other professional institutes. In contrast, a wide range of innovative, imaginative search interfaces exists now:

  • ChemSpider – a free chemicals database which lets you search on a graphic, or even draw a chemical structure yourself and search on it. “Wonderful!” said Karen.
  • Mendeley – a useful specialist search engine to find specific forms of information, for instance patents, hearings, television broadcasts or computer programs.
  • WorldWideScience – pulls together information from a wide range of science websites and presents them in a visually appealing way.
  • – an amusing punch-card-style mock-up of what Google would have looked like in the 1960s.

Karen believes that the ‘digital native’ or ‘digital immigrant’ labels are not helpful and “we have far more useful things to worry about”! Using Google effectively, producing good digital photos – none of this comes naturally to any of us – we have to learn.

The major issue for many of us is not going to be the technical side of using technology but the cost, which could lead to poorer people and those living in remote areas being excluded. Many parts of the UK still do not have broadband.

School homework is often internet based now, with students expected to carry out research online – more difficult for children who have slow internet at home or no internet access at all.

Under new government policy rules, jobseekers will soon be forced to sign up online with a job seeker’s website named Universal Jobmatch, or face losing their benefits (see this Guardian article: ‘Unemployed to be forced to use government job website’). Those without internet access can use their local library – unless, of course, the library has been closed down!

The Millennials may know how to use social media, but perhaps not in a work context. We tend to have an expectation that just by using the internet regularly, the younger generation have absorbed excellent web analysis and communication skills. This is not always the case. University lecturers often report that their students lack awareness of how to assess the validity of sources and construct their own argument in an essay. Perhaps the sheer amount of information available online has resulted in too much spoon-feeding.

Ultimately Karen believes that it’s your attitude to technology that matters, not what technology you were brought up with. It’s down to personality – your level of curiosity and happiness to explore, an individual thing rather than an age thing. This is demonstrated by an interview on the BBC website with a pensioner who enjoys gaming – ‘Computer games keep me mentally active’.

Karen’s presentation is available at

Graham Coult, Editor-in-Chief, Managing Information
‘Research behaviours: the evidence base’

In support of Karen’s talk, Graham gave us an overview of research which has been undertaken into research behaviours – “Karen was the main course, I’m the pudding”. He told us he would present a “selection, even a miscellany, not exhaustive” of relevant research, taken from Emerald and ASLIB’s database of research articles.

‘Social media at the university: a demographic comparison’. Alice B. Ruleman, University of Central Missouri, US (2012)

In this study, Ruleman analysed the demographic differences between faculty staff and students in terms of their social media use. She found that social media is by no means a youthful obsession, with both staff and students being active users of social media, just in different ways.

Graham Coult speaking.


Kilian, T., Hennigs, N. and Langner, S. (2012), “Do Millennials read books or blogs? Introducing a media usage typology of the internet generation”, Journal of Consumer Marketing, Vol. 29, No. 2, pp.114 – 124. ISSN: 0736-3761.

The authors of this study sought to add to the relatively small amount of empirical research done so far on the social media use of the “Internet Generation”. They found that although social media use amongst Millennials is generally high, Millennials as a group are not homogeneous in their online behaviour. Using a large-scale empirical study with over 800 participants, the authors identified three different subgroups of Millennials:

  • ‘Restrained’ – relatively low tech savvy, low social media usage group
  • ‘Entertainment seeking’ – the biggest group. Using social media for entertainment, but consuming passively, rather than creating new content themselves.
  • ‘Highly connected’ – the smallest group, predominantly male, busy creating content such as blogs or videos, leading a very active digital life.

Perhaps surprisingly, ‘information seeking’ was the main reason the surveyed Millennials gave for using social media. Facebook is planning to enhance its search capabilities through its new Facebook Search service. Who needs Google+, or indeed Google, if Facebook does search? This could create a situation where large groups of Facebook users never search outside Facebook.

Vandi, C. and Djebbari, E. (2011),”How to create new services between library resources, museum exhibitions and virtual collections”, Library Hi Tech News, Vol. 28 No. 2, pp.15 – 19. ISSN: 0741-9058.

This paper discusses lots of ways to link up traditional sources using mobile technologies. There is evidence that new technologies (mobile etc) can increase use of “traditional” library services in unforeseen ways.

Graham’s conclusions:

  • There is still a great need for a trusted intermediary such as an experienced information professional. This need has probably increased rather than reduced.
  • Lack of access to technology, and lack of skill in its use, will increase disadvantages for certain user groups.
  • Editing and curating, picking out the best quality information, is likely to become a sought-after skill as information overload increases.

Graham’s presentation is available to NetIKX members at


By Emily Heath

Social media – what next and what can we do with it?

By Elisabeth Goodman1

NetIKX’s first seminar of 2012 was its third on the theme of social media in as many years. Previous seminars explored whether social media should be taken seriously, how social media could be used to achieve organisational goals, and the implications for organisational IM/KM policies and strategies.

This seminar took a broad look at emerging trends and products, their likely implications, and how social media are being, or could be used.

Our first speaker was Steve Dale, “a passionate community and collaboration ecologist, creating off-line and on-line environments that foster conversations and engagement”.

Our second speaker, Geoffrey Mccaleb, describes himself as a social media / mobile consultant.

This blog reviews some of the common themes arising from their presentations, points discussed in syndicate or break-out groups, and in the concluding Q&A, and some of the author’s own reflections.

Social media have been evolving into so much more than plain communication tools

Both speakers shared statistics on Facebook, Twitter, LinkedIn, Google Plus, YouTube and other social media usage. The conclusion: the use of social media tools is enormous and growing! But how these tools are being used, and what they are being used for, is also evolving. Here are some examples.

1.     Facts and figures on some of the better known uses of social media

Recruitment
89% of companies used social media for recruiting in 2011.  One in three rejected candidates based on something they saw online.  45% of companies surveyed used Twitter to find candidates, and 80% used LinkedIn.

Political activism
This is perhaps one of the most publicised uses for social media. However, although 2011 saw 230K tweets per day about change in Egypt, the CIA were blamed for ‘missing’ the Egypt protests by not monitoring Twitter. Similarly, whilst the SOPA protests were being organised online, all but one of the traditional networks in the USA failed to cover them.

Reputation management

Social media is a vital medium for managing an organisation’s reputation, and yet the average time between something going viral and an official response is 48 hours. There are some exceptions, for example Southwest Airlines, who actively monitor and rapidly respond to anything posted on Twitter about them, with a resultant positive impact on their reputation. To what extent are our organisations doing this?

Publicity / PR

47% of journalists used Twitter as a source in 2011 (up from 33% in 2010).  Market-specific blogs are proving to be more popular sources of news than traditional media.

Customer support

It’s all about rethinking how companies / organisations engage with their community: social media should be a “company-wide engagement model”.

2.    A broader exploration of how social media are evolving

Sharing information on interests and hobbies

Facebook lends itself well to doing this already, and is always adding new features to take this further – its new timeline tool being one potential example. There are other tools, such as Pinterest, that take sharing of this kind of information to another level.

Curation of information from multiple sources

Tools such as Storify and Flipboard are examples of how ‘social curators’ can bring together content from several different sources that may be of interest to their audiences. Although we did not discuss this at length, this might be something Library and Information professionals could use to help their end-users with information overload.

Collaborative consumption

Some tools enable people to manage the sharing of physical resources. Examples are ‘Boris bikes’ (the London shared bicycle scheme), sharing the use of an otherwise under-used private car, and Airbnb, for renting out one’s house or bedrooms to visitors, e.g. to the Olympics. Might this be an alternative model for managing information resources between organisations?!

‘Managing’ big data

This is a pet subject of Steve’s: data sets in the cloud are becoming so large that they can no longer be managed with standard database management tools. The data – photos, traffic data, medical data – are usually in the cloud. Visualisation and infographics tools are one way to represent and analyse these large volumes of data.

Gamification
This is an interesting exploration of how the ‘game’ attributes of user engagement, loyalty to brands, and rewards might be transferred to a professional social network environment. In a previous seminar we heard how The Open University Library Services were already experimenting with virtual reality tools as a support for their services. Gamification may take this further.

Augmented reality

There are applications for golf that will let you know where the nearest bunker is and the direction of the wind.  Pointing your phone at the sky can give you information about the constellations. Augmented reality applications literally augment the information that you perceive and thereby help you to look at your world in a different way.

Location-based services

Tools such as Foursquare enable you to find out what’s near you, check in, see who else is there, become ‘mayor’ of your local pub(!), etc. ‘Easypark’ is a Danish company which enables you to pay your parking fee, with a count-down to let you know how much parking time you have left. There is potential for these tools to be much more than a status update, because they tell others that you like something or somewhere.

3.     Some final reflections on technology trends and implications

Mobile platforms

Technology cycles are usually 10 years long, and we are now 2 years into the mobile cycle. The anticipation is that mobile technology will overtake desktop technology within 5 years. And some surprising statistics:

  • More people own mobile phones than toothbrushes!
  • 371K children are born every day.  377K iPhones are sold every day!


2005 – 2010 was about design for the PC with consideration for mobile platforms; 2011 – 2012 (and beyond?) will be about design for mobile platforms with consideration for PCs.

The social graph

This represents all the people that we interact with online: who we know and who we respect online.  37% of US social media users trust what their friends say about a brand or product on social media.  60% will buy something on the basis of what their connections recommend.  Facebook (with shares / likes) and Twitter (with retweets) work on this basis, Google Plus is Google’s attempt at recreating the same thing.

What do people want?

  • To access their data everywhere – aka what is your cloud strategy?
  • To see things relevant to them – aka what is your social graph strategy?
  • To have the same experience regardless of device – aka what is your mobile platform strategy?

4.    Implications of what we heard

We explored several themes in our break out discussions and in the Q&A that followed.

What is the role of information intermediaries in the context of social media?

Are we being pushed out of our roles by these tools – or does our ‘cyberlibrarian’ or ‘curator’ role become even more important?

What is the associated information risk?

With a lot of personal information going onto the internet and into the cloud, is there more scope for criminal activity and identity theft? There was concern that young people don’t appreciate the privacy issues and are not receiving the education they need about this, and that tools such as Foursquare are invitations to burglars while we are not at home.

How to decide what tools to use and when?

The key is being clear about who we are trying to target and what tool(s) they would use.  We discussed the difficulty of changing mindsets within organisations where there are ingrained fears about the use of social media… and how using related case studies, collecting examples of what people have been saying about the organisation, or even taking unilateral action and showing the results (!) may be the way to do this.

Participants mentioned:

  • The BBC’s ‘Your Paintings’ initiative, run jointly with The Public Catalogue Foundation and museums and public institutions throughout the UK, encourages people to ‘tag’ their favourite oil paintings.  It currently has 104,000 pictures in the collection.
  • Phil Bradley’s presentation and notes: “25 barriers to using web 2.0 technologies and how to overcome them” might also provide good insights.

How / why could people use social media tools within their organisations?

Chatter and Yammer are Twitter-like tools being used within organisations, and in some cases have had a dramatic effect in lowering the use of e-mail.  Chatter and Yammer threads are saved and searchable, and work well for organisations where people are working in different time zones.  We didn’t discuss this here, but such tools could be excellent for idea generation and problem solving, or ‘crowd-sourcing’, within an organisation.


1. Elisabeth Goodman is Programme Events Manager for NetIKX.  She also runs her own business, RiverRhee Consulting.

November 2011 Seminar: What next for the web? – a look at linked data and semantic search


This was a joint event, run with the Information for Energy Group (IFEG) and hosted at The Energy Institute.  The session addressed the issue of linked data and the semantic web.



Richard Wallis is a Technology Evangelist and has been with the UK’s leading Linked Data and Semantic Web technology company, Talis, for over eleven years. This, coupled with his passion for and engagement with new and emerging technology trends, gives him a unique perspective on the issues challenging information professionals today. As Technology Evangelist he is at the forefront in promoting, explaining, and applying new and emerging Web and Semantic Web technologies in the wider information domain. Richard is an active blogger and regular podcaster in the ‘Talking with Talis’ series.

Victoria Uren supported Richard.

Time and Venue

November 2011, 2pm The British Dental Association, 64 Wimpole Street, London W1G 8YS


No slides available




See our blog report: What next for the Web and information services? Linked data and semantic search

Study Suggestions


What next for the Web and information services? Linked data and semantic search

By Nicola Franklin

Where is the web going?  That was the question the speakers at the 2nd November 2011 NetIKX seminar were aiming to answer.  This joint event, run with the Information for Energy Group (IFEG) and hosted at The Energy Institute, addressed the issue of linked data and the semantic web.

Whereas Web 1.0 might be thought of as ‘brochure ware’, one-way communication, and Web 2.0 has come to mean interactive, two-way communication online, the future seems to be for information and knowledge management itself to move onto the web.

The two speakers at this session described how this phenomenon is coming about, from two perspectives – Richard Wallis of Talis from the point of view of a producer or publisher of linked data onto the web, and Dr Victoria Uren of Aston University from the perspective of a researcher searching the semantic web.

What is linked data?  Richard gave an excellent introduction to the topic, leading us through a logical path to understanding how information from different data sets can be shared, merged and used online.  When the web originated, it was about publishing text documents with links to other text documents, using html.  Linked data is about linking ‘things’ to other ‘things’, by giving them a label or identifier (a URI).  Things also have attributes, like a name, size, location, etc.

The example Richard used was a spacecraft. 

A spacecraft is a ‘thing’ and can be given a label, such as:


To make sure this is a unique label, some more information might be added, for example:


To make sure people know it is your spacecraft you might add some extra information:

To store (publish) some information about this object on the internet, you just put it at http://…

When people start thinking about things, their attributes and how they link up together, they tend to think visually – as a diagram of ovals (the ‘things’) connected by labelled arrows (their attributes):


To transfer this into machine-readable ‘computer speak’, the ovals are replaced by brackets:

<…/spacecraft/1969-059A>   = a thing

name Apollo 11 CSM         = an attribute and the value for that attribute

This language is called Resource Description Framework, or RDF for short.

Once common objects, or ‘things’, which are being talked about by different people in different locations, are identified by the same RDF label, then attributes or data about those things can be merged from those different sources – the data can be linked.

This can be very powerful.  For example, location data drawn from the Ordnance Survey can be linked with local authority data or central government data or NHS data.  This could answer questions like “how much was spent by this organisation in that area on this service, when this party was in power?”.
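To make the merging idea concrete, here is a minimal Python sketch. The URIs, area names and spending figures are all invented for illustration (this is not real Ordnance Survey or government data), but it shows the mechanism: once two publishers use the same identifier for the same ‘thing’, pooling their triples lets you query across both sources at once.

```python
# Minimal sketch of linked data as (subject, predicate, object) triples.
# All URIs and figures here are invented for illustration.

# Source 1: a hypothetical mapping agency publishes location data
geo_triples = {
    ("http://example.org/area/E1", "type", "AdministrativeArea"),
    ("http://example.org/area/E1", "name", "Exampleshire"),
}

# Source 2: a hypothetical government department publishes spending data,
# using the SAME URI for the area - this is what makes the linking work
spend_triples = {
    ("http://example.org/area/E1", "spendOnService", 125000),
    ("http://example.org/area/E1", "service", "libraries"),
}

# "Merging" is just pooling the triples: shared URIs join the data sets
graph = geo_triples | spend_triples

def objects(graph, subject, predicate):
    """Return all object values for a given subject and predicate."""
    return [o for s, p, o in graph if s == subject and p == predicate]

# "How much was spent on this service in that area?"
area = "http://example.org/area/E1"
print(objects(graph, area, "name"))            # ['Exampleshire']
print(objects(graph, area, "spendOnService"))  # [125000]
```

In practice this is done with RDF stores and libraries rather than Python sets, but the principle is the same: the shared URI is the join key.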

An example of linked data in action can be found on the BBC nature website.  This links together video archives from the BBC, information from Wikipedia, and information from other species or habitat-specific websites from various other organisations, displaying them all on one page.

Linked data can be used within an organisation, to publish data behind a firewall using intranet tools, which links together information from different business units, or held in different (perhaps incompatible) IT systems.  It can also be used to publish data externally to the internet, where other people and organisations can link it to their own data – either by using the same identifiers for ’things’ in common, or by mapping between their identifier and another one used for the same ‘thing’.

Some common standards are emerging, where ontologies or naming schemas are being published and adopted to ensure that different organisations use the same identifying labels to refer to the same ‘things’.  One example is the standard being jointly adopted by Google, Bing and Yahoo.

How about semantic search?   Victoria’s talk began from the opposite end of the spectrum – given that linked data exists on the web, how do you search for it?

Traditional online searching is based around keyword search, which uses methods such as counting words, page ranking using links, controlled form searching (eg; OPAC) or metadata.  These methods were developed for searching text.  To search structured data needs a different approach.

Victoria listed a range of query languages that have been developed but said that SPARQL, which was based upon SQL, was the most widely utilised.  As it isn’t reasonable to expect users to familiarise themselves with a query language like this in order to carry out a search, a more friendly user interface is needed.
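To give a flavour of how structured querying differs from keyword search, here is an illustrative Python sketch of the triple-pattern matching that underlies SPARQL. The data and identifiers are invented, and real SPARQL syntax and engines are far richer than this; the point is only that a query is a pattern containing variables, and the engine finds every binding of those variables present in the data.

```python
# Toy triple-pattern matcher illustrating the idea behind SPARQL queries.
# Pattern terms starting with '?' are variables; others must match literally.
# Data and identifiers are invented for illustration.

triples = [
    ("apollo11", "type", "Spacecraft"),
    ("apollo11", "name", "Apollo 11 CSM"),
    ("voyager1", "type", "Spacecraft"),
    ("voyager1", "name", "Voyager 1"),
]

def match(pattern, triples):
    """Yield one {variable: value} binding dict per matching triple."""
    for triple in triples:
        binding = {}
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = val       # a variable binds to whatever is there
            elif pat != val:
                break                    # a literal term failed to match
        else:
            yield binding

# Roughly analogous to:  SELECT ?craft WHERE { ?craft type Spacecraft }
results = [b["?craft"] for b in match(("?craft", "type", "Spacecraft"), triples)]
print(results)  # ['apollo11', 'voyager1']
```

A real engine also joins multiple patterns together and optimises over large stores, which is exactly why, as Victoria noted, end users need a friendlier interface on top.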

Again a range of methods have been developed:

  • Keywords
  • Forms
  • Graph based
  • Question answering
  • Tabular browsing

Victoria described the pros and cons of each method:

Keyword searching is easy to use but is restricted to simple searches for ‘a thing’.  Forms are a familiar interface, and allow more complex searches than single keywords, but forms need to be predefined and are therefore restrictive.  Graph-based searches give a visual representation of the data, but this is hard to do for anything more than one ontology (ie, data from one source).  Natural language question answering is easy for the user, and good for heterogeneous data sets, but requires some heavy-duty computing power to avoid being very slow.  Tabular browsing, where you start with one keyword and are presented with a whole range of linked words to choose from to narrow the search, can be clumsy.

Victoria felt that semantic search is very good for corporate data management, where information is typically focused around one topic area and it is very useful to be able to bridge between different data silos.  She gave examples of Drupal7, Virtuoso and Talis as systems that can be used for this.

Following a coffee break Syndicate Groups were set up to discuss several questions:

  • What is the value to a business of using linked data and semantic search?
  • Who would use the stuff from our organisation?
  • What are our needs for corporate data management – what tools are needed?

I took part in one of the two tables discussing the first question.  We felt that linking silos of information could help more people to find the right information, more quickly, and also to discover information they previously didn’t know existed (and therefore wouldn’t search for).  This could lead to finding the people behind the information and strengthening relationships.  It could also increase efficiency, raise cross-fertilisation and improve innovation.

During the group feedback session and discussion that followed, the issue of the risks of open and linked data was brought up.  Could increased ease of access to some data, and the linking together of many pieces of data from different sources into one location, be misused?  One example given was of insurance companies potentially refusing a life or healthcare policy to someone they’d discovered had an unhealthy lifestyle.  Another example could be terrorists making use of combined information from Ordnance Survey data + Google Maps + other data sets to plan atrocities.

Linked data tools and open data publishing seems to have many potential benefits and also some risks; as with any rapid change the regulation and safeguards against the risks will probably lag behind what is taking place in practice.

Using social media to achieve organisational goals – the next steps

Blog by Elisabeth Goodman

A shift from scepticism about, to evangelism for, Social Media?

On 19th January, NetIKX hosted what proved to be a very successful seminar on this theme, with speakers Dr Hazel Hall1 and Nicky Whitsed2.   It was a follow-on seminar to one hosted the previous year, where we had introduced our members to a range of social media tools, and questioned if and how NetIKX might use them and also guide people in their use3.

Although our January 2010 seminar was also very popular, there was still some scepticism about the value of social media tools, and how organisations might use them.  This time, as Hazel commented to me in an aside at the end of the meeting, the tone was perhaps more one of how organisations might be persuaded to adopt the wider use of social media.

Social Media can be used by Library and Information Departments for a diverse range of purposes

Our speakers described the wide range of uses that social media tools can be put to, and their ability, beyond that of the tools previously available to us, to connect people as well as data and information.  We, and our customers, can use social media tools for:

  • Collaborating on projects and learning through wikis and ‘tweet-ups’
  • Supporting staff development, teaching and training, e.g. through ‘amplified events’, where someone present at an event shares the content through Twitter with those who cannot attend, or posts a recording of the event for others to access afterwards.  The Open University uses Elluminate to run and record such events.
  • Providing virtual reference sources
  • Seeking feedback or peer review on planned presentations (which Hazel did for this presentation)
  • Gaining a better understanding of customer needs, leading to new service developments

As Nicky pointed out, it’s important to understand the tools that our customers are using, and to be able to deliver services through those.  In fact her department has a ‘digilab’ where they have all the latest technology and social media tools, enabling their staff to become familiar with their use, and experiment with new ways of delivering their services.

The adoption of Social Media will be evolutionary, with some people leading the way

In the syndicate discussion groups that followed the presentations, delegates discussed the already visible evolutionary pathway in the adoption of social media by organisations.

Human Resources departments are using tools such as LinkedIn to learn about potential recruits.

Sales and Marketing teams are using Twitter and monitoring the web to find out and in some cases respond to what their customers are saying, monitor the competition and also influence the perception of their organisation.

Some companies are using tools such as Yammer internally to try out the use of such tools, or even to support the ‘crowd-sourcing’ of ideas in project management or general problem resolution4.

There needs to be a fine balance between policies and trust

It’s certain that organisations need some form of policy for the use of social media to address such issues as security and ethical behaviour.  Nicky shared details of sites such as http:/ that can help us with that.  However, policies need to allow sufficient scope so as not to discourage the use of social media.

Library and Information professionals could influence the policies within organisations, and even encourage the adoption of values or competencies within performance review frameworks that promote knowledge sharing through social media tools.

As we discussed in one of the syndicate groups, people are used to assessing and building trust through face-to-face interactions.  Social media users are now finding proxies for building that trust, for example by relying on the judgment of those whom they know already, seeing which postings are re-tweeted by others, or reviewing the posting history of new people that they ‘meet’ online.

Increased adoption of social media by organisations will require a cultural change

Again, as put by one of the syndicate groups, we are operating in a ‘perpetual beta’ environment.  This is a shift for organisations that are used to making decisions on well-established software with a firm support infrastructure.

As Hazel put it, we also have a ‘youngster elders’ scenario, where people who are perhaps more used to leading and being the authority on subjects, need to be open to seeking guidance from the more knowledgeable younger generation (as some of us may already be doing at home!).

Hazel and Nicky described how Library and Information professionals can play a role in guiding and supporting the evolutionary adoption of social media tools by:

  • Demonstrating how the tools can be used
  • Experimenting and developing our own capabilities, as well as giving our users the opportunity to experiment
  • Providing training e.g. in digital literacy

Concluding thoughts

The use of social media tools in the organisation should be part of the Library and Information Management strategy, but it tends to be owned by Security.  We need to help organisations switch from an emphasis on the risk of using social media to the risk of not using these tools.


  1. Dr Hazel Hall is Director of the Centre for Social Informatics in the School of Computing at Edinburgh Napier University. She also leads the implementation of the UK Library and Information Science Research Coalition. Hazel was named IWR Information Professional of the Year in December 2009.
  2. Nicky Whitsed is Director of Library Services at the Open University.  She is an experienced strategic and change manager having led successful projects in the commercial, medical and academic fields. Nicky is trained in project management and facilitation and also has experience as a trainer. She has served on a number of CILIP and JISC committees and on a number of editorial boards.
  3. Elisabeth Goodman and Suzanne Burge presented on ‘Social networking tools – should they be taken seriously’ in January 2010.  See Elisabeth’s presentation: “Using LinkedIn, blogs and Twitter for networking and communities of interest”
  4. See related blog by Matthew Loxton on crowd-sourcing
  5. Whilst writing this blog, several of the participants at the seminar also shared their accounts of the meeting.  See, for example, the following:
  6. Elisabeth Goodman is the Programme Events Manager for NetIKX, and is also the Owner and Principal Consultant at RiverRhee Consulting, providing 1:1 guidance, training / workshops and support for enhancing team effectiveness through process improvement, knowledge and change management. She also provides 1:1 tutorials, seminars and workshops on the use of LinkedIn and other social media. Read Elisabeth Goodman’s blog for more discussions on topics covered by this blog.

January 2009 Seminar: Person to person communications on the Internet: a BBC experience


At this meeting NetIKX was very pleased to welcome a well-known figure from our field, who has written a powerful book on the subject.


Euan Semple, Director and Author,

Time and Venue

January 2009, 2pm The British Dental Association, 64 Wimpole Street, London W1G 8YS


No slides available




Blog no longer available

Study Suggestion

Book: Organisations don’t tweet; people do — Euan Semple