NSW KM Forum Tuesday 23rd March 2010 Café – User Adoption of Enterprise Search

11 June 2010

Topics for discussion:


Search Analytics & Tuning (basically improving the performance of search based on reporting)
Searching across multiple collections and/or media (sometimes called Federated Search; would probably include security issues)


User Adoption

  •  The “Google” experience – content?
  • Single search box
  • Corporate databases are not like the internet
  • Do users want a ranking experience, cf. content cleanup/cleansing?
  • Don’t we just implement a search engine and the users just use it?
  • Adoption easier where product is better than what they had – manage user expectations
  • Meaningful results
  • 3rd and 4th attempts at implementing search – what went wrong?
  • TRUST lost in the search & database results if poor experience
  • Education of users re validity – information literacy
  • Search results to “lowest common denominator”


 Search analytics & tuning

  •  Purpose built databases
  • Monitor access and deselect content
  • Does not sell search
  • Not seen by end user but are important to search experience
  • Sold for internal intelligence – what are people doing?
  • Best bets, design info architecture
  • Typing in misspellings
  • Search answer to problems with data, brings it together
  • Time series
  • Context/content specific to have meaning
  • How do we know the content is accessible?
  • The long tail – users cannot tune it themselves
  • SEO – web or enterprise context
  • Only analysing content with analytics
  • Used in isolation – should be used alongside other tools/studies
  • Are the right people involved in analytics?
  • Information architecture related
  • Understand what users are searching for
  • “I want to get something quickly”
  • Technical pre-requisites
  • Under-used
  • Spend money on web – the external presence
  • What to do with the reports?
  • Guess why users input search terms, and why content is returned
  • What change in behaviour are you looking for with analytics?
  • Not obvious how to use analytics
  • How to use the reports to reform/change info?
  • Do not assess limitations
  • Opportunity to make changes to content
  • Relationship with user experience
  • Search logs, reports, metadata, survey
  • Relationship with social networking eg rate documents in a search result set
  • Trust, privacy issues
  • Evidence based
  • Is the content there? What are the mechanisms to improve/change content?
  • Evaluation tool for content
  • Exposes flaws in content storage and repositories
  • Puts focus on content
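Several of the points above — monitoring query logs, spotting misspellings, separating the head from the long tail — amount to simple log analysis. A minimal sketch, assuming a hypothetical log format of (query, result count) pairs; the records and field names are illustrative, not from any real product:

```python
from collections import Counter

# Hypothetical search-log records: (query, number of results returned)
log = [
    ("annual leave policy", 14),
    ("anual leave policy", 0),   # misspelling -> zero results
    ("annual leave policy", 14),
    ("office 2010 rollout", 3),
    ("parking", 0),
]

queries = Counter(q for q, _ in log)
zero_hits = Counter(q for q, n in log if n == 0)

# Head vs tail: the few queries people repeat vs the long tail of one-offs
head = [q for q, c in queries.most_common() if c > 1]
tail = [q for q, c in queries.items() if c == 1]

# Zero-result rate is a common starting metric for tuning and best bets
zero_rate = sum(zero_hits.values()) / len(log)

print("head queries:", head)
print("tail size:", len(tail))
print(f"zero-result rate: {zero_rate:.0%}")
```

Reports like these only become useful alongside the other tools mentioned above — surveys, metadata review, content audits — since the log alone leaves you guessing at intent.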

 Federated search

  • Simultaneous searching of multiple databases
  • User has to know where data is – federated search overcomes this
  • Overlapping content
  • Information in databases is purpose built
  • Vs amalgamating data
  • Leave info where it belongs
  • Business Unit can determine which data to expose
  • Presentation of search results – where does data come from – sometimes confusing for user
  • Meet user expectations if user knows sources eg email, desktop, repository
  • Semantic web
  • Federated search can provide context as well as content
  • Utopia? Holy Grail of IM?
  • Overcomes underlying complexity issues
  • Hyper-federated search (many to “all” sources) OR selected sources eg 405 Library collections
  • Structured and unstructured information and data
  • Contextual clusters (eg locality, concepts) vs isolated information and data points vs process-bound “trapped” info & data
  • Still need taxonomy
  • Cloud – out of sight, out of mind
  • Allows info/data to remain “in situ”
  • ↓$ Cheaper, therefore ↑ achievable – faster adoption and more sustainable
  • Fits SOA model: Search across data sources inc legacies
  • Risks: lifecycle of applications (eg PowerPoint 95)
  • (Digital preservation strategies) PDF/A from databases, database archiving
  • Enterprise vs Web
  • Richness vs reach – refining + training search
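The core mechanic discussed above — simultaneous searching of multiple databases, leaving info where it belongs, and showing users where each result comes from — can be sketched as a parallel fan-out over stubbed-in sources. The source names and stub functions here are hypothetical stand-ins for real back-ends (email, a document repository, an intranet):

```python
from concurrent.futures import ThreadPoolExecutor

# Stub back-ends standing in for real repositories. In practice each
# entry would wrap a connector that honours that source's security model.
SOURCES = {
    "email":    lambda q: [f"msg about {q}"],
    "dms":      lambda q: [f"doc: {q} policy", f"doc: {q} procedure"],
    "intranet": lambda q: [],
}

def federated_search(query):
    """Query every source in parallel and label each hit with its origin,
    so the result page can show users where the data comes from."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, query) for name, fn in SOURCES.items()}
        results = []
        for name, fut in futures.items():
            results.extend((name, hit) for hit in fut.result())
    return results

hits = federated_search("leave")
print(hits)
```

Data stays “in situ” in each source; only the query fans out and the labelled results come back, which is why this fits the SOA model mentioned above, including legacy systems behind their own connectors.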


  • Want to see more of it … why? – is very effective if people understand how metadata works
  • Need to rely on a combination of both structured and unstructured data
  • If you have both, users need to understand what they’re searching over – structured or unstructured… OR DO THEY?
  • METADATA is about retrieving, so whatever gives the best result will get my vote
  • If people are asked to rate (for example), they need to know the implications of their ratings, since those ratings will form the metadata
  • If there are confidentiality issues in an org, how does enterprise search deal with firewalls?
  • One org transferred to SharePoint and got rid of all shared drives… then all docs created were done through templates which already had metadata built in.
  • Orgs have been trying to get metadata right and have had a very hard time dealing with it
  • The more expansive the metadata is the better the search capability
  • Issue – no standard around metadata terms
  • Issue – people using old documents
  • Most systems have very limited/restricted metadata – not flexible or variable enough to give accurate metadata that’s organisation specific
  • The word “metadata” is an obstacle in itself… “ever since I heard the word I’ve hated it”
  • There are auto-metadata systems now that will pull out “entities” to put into metadata (still evolving technology… effort vs results…)
  • Important to reduce the disconnect between the act of inputting the metadata and the value the user sees in it for future searching – awareness rather than training  and as much automated metadata as possible.
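The auto-metadata systems mentioned above “pull out entities” from text so authors don’t have to fill fields by hand. A toy sketch of the idea using plain regular expressions — the field names and patterns are illustrative only, and real products use far richer extraction:

```python
import re

def auto_metadata(text):
    """Toy auto-tagger: pull simple 'entities' (dates, document codes,
    capitalised personal names) out of free text to pre-populate
    metadata fields, reducing the burden on the author."""
    return {
        "dates": re.findall(r"\b\d{1,2} \w+ \d{4}\b", text),
        "codes": re.findall(r"\b[A-Z]{2,}-\d+\b", text),
        "names": re.findall(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", text),
    }

doc = "Approved by Jane Citizen on 23 March 2010 under policy HR-102."
print(auto_metadata(doc))
```

Even a crude extractor like this illustrates the trade-off raised in the discussion: automation narrows the gap between inputting metadata and seeing its value in search, but effort vs results still has to be weighed.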
