Semantic Search Peter Mika Yahoo! Research

Peter Mika's Presentation at SSSW 2011


Page 1: Peter Mika's Presentation at SSSW 2011

Semantic Search

Peter Mika

Yahoo! Research

Page 2: Peter Mika's Presentation at SSSW 2011

- 2 -

Yahoo! serves over 680 million users in 25 countries

Page 3: Peter Mika's Presentation at SSSW 2011

- 3 -

Yahoo! Research: visit us at research.yahoo.com

Page 4: Peter Mika's Presentation at SSSW 2011

- 4 -

Yahoo! Research Barcelona

• Established January, 2006

• Led by Ricardo Baeza-Yates

• Research areas

– Web Mining

• content, structure, usage

– Social Media

– Distributed Web retrieval

– Geo information retrieval

– Semantic search

Page 5: Peter Mika's Presentation at SSSW 2011

- 5 -

Search is really fast, without necessarily being intelligent

Page 6: Peter Mika's Presentation at SSSW 2011

- 6 -

Why Semantic Search? Part I

• Improvements in IR are harder and harder to come by

– Machine learning using hundreds of features

• Text-based features for matching

• Graph-based features provide authority

– Heavy investment in computational power, e.g. real-time indexing and instant search

• Remaining challenges are not computational, but in modeling user cognition

– Need a deeper understanding of the query, the content and/or the world at large

– Could Watson explain why the answer is Toronto?

Page 7: Peter Mika's Presentation at SSSW 2011

- 7 -

Poorly solved information needs

• Multiple interpretations

– paris hilton

• Long tail queries

– george bush (and I mean the beer brewer in Arizona)

• Multimedia search

– paris hilton sexy

• Imprecise or overly precise searches

– jim hendler

– pictures of strong adventures people

• Searches for descriptions

– countries in africa

– 32 year old computer scientist living in barcelona

– reliable digital camera under 300 dollars

Many of these queries would not be asked by users, who have learned over time what search technology can and cannot do.

Page 8: Peter Mika's Presentation at SSSW 2011

- 8 -

Dealing with sparse collections

Note: don’t solve the sparsity problem where it doesn’t exist


Page 9: Peter Mika's Presentation at SSSW 2011

- 9 -

Contextual Search: content-based recommendations

Hovering over an underlined phrase triggers a search for related news items.


Page 10: Peter Mika's Presentation at SSSW 2011

- 10 -

Contextual Search: personalization

A machine-learning-based ‘search’ algorithm selects the main story and the three alternate stories based on the user’s demographics (age, gender, etc.) and previous behavior.

Display advertising is a similar top-1 search problem on the collection of advertisements.

Page 11: Peter Mika's Presentation at SSSW 2011

- 11 -

Contextual Search: new devices

Show related content

Connect to friends watching the same

Page 12: Peter Mika's Presentation at SSSW 2011

- 12 -

Aggregation across different dimensions

Hyperlocal: showing content from across Yahoo that is relevant to a particular neighbourhood.


Page 13: Peter Mika's Presentation at SSSW 2011

- 13 -

Why Semantic Search? Part II

• The Semantic Web is now a reality

– Billions of triples

– Thousands of schemas

– Varying quality

• End users

– Keyword queries, not SPARQL

• Searching data instead of or in addition to searching documents

– Providing direct answers (possibly with explanations)

– Support for novel search tasks

Page 14: Peter Mika's Presentation at SSSW 2011

- 14 -

Direct answers in search

Information box with content from and links to Yahoo! Travel

Points of interest in Vienna, Austria

Since Aug 2010, ‘regular’ search results are ‘Powered by Bing’

Products from Yahoo! Shopping

Page 15: Peter Mika's Presentation at SSSW 2011

- 15 -

Novel search tasks

• Aggregation of search results

– e.g. price comparison across websites

• Analysis and prediction

– e.g. world temperature by 2020

• Semantic profiling

– recommendations based on particular interests

• Semantic log analysis

– understanding user behavior in terms of objects

• Support for complex tasks (search apps)

– e.g. booking a vacation using a combination of services

Page 16: Peter Mika's Presentation at SSSW 2011

- 16 -

Why Semantic Search? Part III

• There is a business model

– Publishers are (increasingly) interested in making their content searchable, linkable and easier to aggregate and reuse

– So that social media sites and search engines can expose their content to the right users, in a rich and attractive form

• This is about creating an ecosystem…

– More advanced in some domains

– In others, we still live in the tyranny of rate-limited APIs, licenses etc.

Page 17: Peter Mika's Presentation at SSSW 2011

- 17 -

This is not a business model

http://en.wikipedia.org/wiki/Underpants_Gnomes

Page 18: Peter Mika's Presentation at SSSW 2011

- 19 -

Example: Facebook’s Like and the Open Graph Protocol

• The ‘Like’ button provides publishers with a way to promote their content on Facebook and build communities

– Shows up in profiles and news feed

– Site owners can later reach users who have liked an object

– Facebook Graph API allows 3rd party developers to access the data

• The Open Graph Protocol is an RDFa-based format for describing the object that the user ‘Likes’

Page 19: Peter Mika's Presentation at SSSW 2011

- 20 -

Example: Facebook’s Open Graph Protocol

• RDF vocabulary to be used in conjunction with RDFa

– Simplify the work of developers by restricting the freedom in RDFa

• Activities, Businesses, Groups, Organizations, People, Places, Products and Entertainment

• Only HTML <head> accepted

• http://opengraphprotocol.org/

<html xmlns:og="http://opengraphprotocol.org/schema/">
 <head>
  <title>The Rock (1996)</title>
  <meta property="og:title" content="The Rock" />
  <meta property="og:type" content="movie" />
  <meta property="og:url" content="http://www.imdb.com/title/tt0117500/" />
  <meta property="og:image" content="http://ia.media-imdb.com/images/rock.jpg" />
  …
 </head>
 ...

Page 20: Peter Mika's Presentation at SSSW 2011

Semantic Search

Page 21: Peter Mika's Presentation at SSSW 2011

- 22 -

Semantic Search: a definition

• Semantic search is a retrieval paradigm that

– Makes use of the structure of the data or explicit schemas to understand user intent and the meaning of content

– Exploits this understanding at some part of the search process

• Combination of document and data retrieval

– Documents with metadata

• Metadata may be embedded inside the document

• I’m looking for documents that mention countries in Africa.

– Data retrieval (see the sketch after this list)

• Structured data, but searchable text fields

• I’m looking for directors who have directed movies where the synopsis mentions dinosaurs.

• Web search vs. vertical/enterprise/desktop search

• Related fields:

– XML retrieval

– Keyword search in databases

– Natural Language Retrieval
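
A minimal sketch of the data-retrieval example above as a hybrid query: structured graph patterns plus a keyword filter over a free-text field. It uses Python with the rdflib library; the file name movies.ttl and the ex: schema are hypothetical, not something the slides prescribe.

from rdflib import Graph

g = Graph()
g.parse("movies.ttl")  # hypothetical local dataset using a made-up ex: schema

# Structured constraints (movie -> director) combined with a
# keyword condition on a searchable text field (the synopsis).
query = """
PREFIX ex: <http://example.org/schema/>
SELECT DISTINCT ?director WHERE {
  ?movie ex:director ?director ;
         ex:synopsis ?synopsis .
  FILTER regex(?synopsis, "dinosaur", "i")
}
"""
for row in g.query(query):
    print(row.director)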

Page 22: Peter Mika's Presentation at SSSW 2011

- 23 -

Semantics at every step of the IR process

[Figure: the steps of the IR process where semantics can be applied: query interpretation of the user’s keywords (q = “bla”), document processing of pages from the Web, indexing, ranking with a scoring function θ(q, d), and result presentation back to the user; the IR engine sits between the user and the Web.]

Page 23: Peter Mika's Presentation at SSSW 2011

Data on the Web

Page 24: Peter Mika's Presentation at SSSW 2011

- 25 -

Data on the Web

• Most web pages on the Web are generated from structured data

– Data is stored in relational databases (typically)

– Queried through web forms

– Presented as tables or simply as unstructured text

• The structure and semantics (meaning) of the data is not directly accessible to search engines

• Two solutions

– Extraction using Information Extraction (IE) techniques (implicit metadata)

• Supervised vs. unsupervised methods

– Relying on publishers to expose structured data using standard Semantic Web formats (explicit metadata)

• Particularly interesting for long tail content

Page 25: Peter Mika's Presentation at SSSW 2011

- 26 -

Information Extraction methods

• Natural Language Processing

• Extraction of triples

– Suchanek et al. YAGO: A Core of Semantic Knowledge Unifying WordNet and Wikipedia, WWW, 2007.

– Wu and Weld. Autonomously Semantifying Wikipedia, CIKM 2007.

• Filling web forms automatically (form-filling)

– Madhavan et al. Google's Deep-Web Crawl. VLDB 2008

• Extraction from HTML tables

– Cafarella et al. WebTables: Exploring the Power of Tables on the Web. VLDB 2008

• Wrapper induction

– Kushmerick et al. Wrapper Induction for Information Extraction. IJCAI 1997

Page 26: Peter Mika's Presentation at SSSW 2011

- 27 -

Semantic Web

• Sharing data across the Web

– Publish information in standard formats (RDF, RDFa)

– Share the meaning using powerful, logic-based languages (OWL, RIF)

– Query using standard languages and protocols (HTTP, SPARQL)

• Two main forms of publishing

– Linked Data

• Data published as RDF documents linked to other RDF documents and/or exposed via SPARQL endpoints

• Community effort to re-publish large public datasets (e.g. DBpedia, open government data)

– RDFa

• Data embedded inside HTML pages

• Recommended for site owners by Yahoo, Google, Facebook

Page 27: Peter Mika's Presentation at SSSW 2011

- 28 -

Resource Description Framework (RDF)

• Each resource (thing, entity) is identified by a URI

– Globally unique identifiers

– Locators of information

• Data is broken down into individual facts

– Triples of (subject, predicate, object)

• A set of triples (an RDF graph) is published together in an RDF document

[Figure: an RDF document containing two triples about the resource example:roi: (example:roi, name, “Roi Blanco”) and (example:roi, type, foaf:Person).]
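
As a small illustration of the triple model in the figure, the sketch below builds the same two facts and serializes them as one RDF document. It assumes the rdflib Python library (the slides do not prescribe a toolkit); the example.org namespace is the placeholder used on the slide.

from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")

g = Graph()                                        # one graph, published as one RDF document
g.bind("ex", EX)
g.bind("foaf", FOAF)
g.add((EX.roi, FOAF.name, Literal("Roi Blanco")))  # (subject, predicate, object)
g.add((EX.roi, RDF.type, FOAF.Person))

print(g.serialize(format="turtle"))                # a str in rdflib 6+, bytes in older versions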

Page 28: Peter Mika's Presentation at SSSW 2011

- 29 -

Linked Data: interlinked RDF documents

[Figure: interlinked RDF documents. Roi’s homepage describes example:roi (name “Roi Blanco”, type foaf:Person) and links it via sameAs to example:roi2, which is described in Yahoo’s data with an email address and a worksWith link to example:peter; the type links point to foaf:Person in the Friend-of-a-Friend ontology.]

Page 29: Peter Mika's Presentation at SSSW 2011

- 30 -

<p typeof="foaf:Person" about="http://example.org/roi">
  <span property="foaf:name">Roi Blanco</span>.
  <a rel="owl:sameAs" href="http://research.yahoo.com/roi">Roi Blanco</a>.
  You can contact him <a rel="foaf:mbox" href="mailto:[email protected]">via email</a>.
</p>

...

RDFa: metadata embedded in HTML

Roi’s homepage

Page 30: Peter Mika's Presentation at SSSW 2011

- 31 -

Crawling the Semantic Web

• Linked Data

– Similar to HTML crawling, but the crawler needs to parse RDF/XML (and other formats) to extract URIs to be crawled

– Semantic Sitemap/VOID descriptions

• RDFa

– Same as HTML crawling, but data is extracted after crawling

– Mika et al. Investigating the Semantic Gap through Query Log Analysis, ISWC 2010.

• SPARQL endpoints

– Endpoints are not linked, need to be discovered by other means

– Semantic Sitemap/VOID descriptions
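
A minimal sketch of the Linked Data crawling loop described above: dereference a URI, parse whatever RDF comes back, and queue every newly seen URI. It assumes rdflib’s Graph.parse can fetch a URI and detect the serialization; a real crawler would add politeness, robots handling and Semantic Sitemap/VoID discovery.

from collections import deque
from rdflib import Graph, URIRef

def crawl_linked_data(seed_uris, max_fetches=100):
    seen = set(seed_uris)
    queue = deque(seed_uris)
    triples = []
    fetched = 0
    while queue and fetched < max_fetches:
        uri = queue.popleft()
        fetched += 1
        g = Graph()
        try:
            g.parse(uri)                  # RDF/XML, Turtle, ... chosen via content negotiation
        except Exception:
            continue                      # not RDF, or unreachable: skip
        for s, p, o in g:
            triples.append((s, p, o))
            for node in (s, o):           # follow links to other resources
                if isinstance(node, URIRef) and node not in seen:
                    seen.add(node)
                    queue.append(node)
    return triples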

Page 31: Peter Mika's Presentation at SSSW 2011

- 32 -

Data fusion

• Ontology alignment/matching

– Widely studied in Semantic Web research

• Entity resolution

– Record linkage in databases

• Machine learning based approaches, focusing on attributes

– Graph-based approaches

• e.g. the work of Lisa Getoor is applicable to RDF data

• Improvements over attribute-based matching alone

• Blending

– Merging objects that represent the same real world entity and reconciling information from multiple sources
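
A toy illustration of the attribute-based entity resolution mentioned above, assuming records with hypothetical name and email attributes: identical email addresses are taken as a match, otherwise names are compared with a simple token-overlap similarity. Real systems combine many such features in a learned model.

def jaccard(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def same_entity(rec1, rec2, threshold=0.8):
    # an exact match on a discriminative attribute settles it
    if rec1.get("email") and rec1.get("email") == rec2.get("email"):
        return True
    # otherwise fall back to fuzzy matching on the name
    return jaccard(rec1.get("name", ""), rec2.get("name", "")) >= threshold

print(same_entity({"name": "Roi Blanco"}, {"name": "blanco roi"}))  # True: full token overlap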

Page 32: Peter Mika's Presentation at SSSW 2011

- 33 -

Data quality

• Data quality is a serious issue

– Quality ranges from professional data to user-generated content

– Mistakes in extraction or markup

– Misuse of the ontology language (e.g. owl:sameAs)

– Trust issues

• Can we trust DBpedia.org to represent Wikipedia?

• Quality assessment and data curation

– Automated data validation

• Against known-good data or using triangulation

• Validation against the ontology or using probabilistic models

– Manual validation by trained professionals or using crowd-sourcing

• Sampling data for evaluation

– Curation based on user feedback

Page 33: Peter Mika's Presentation at SSSW 2011

- 34 -

The role of ontologies in semantic search

• Semantic Web data is extremely heterogeneous

– Billion Triples Challenge dataset has 33,000 unique properties

• Any search engine has a partial understanding of this data

– A search engine may ‘support’ a handful of ontologies

– The rest of the data is just a labeled graph

• Reducing fragmentation in ontologies is critical for search

– Decentralized ontology development doesn’t converge

– schema.org is a step toward reducing the fragmentation

• An agreement on the vocabularies supported by Bing/Google/Yahoo

• Open research question whether reasoning is even useful to improve search

– Mixed results in IR with query- and document expansion

Page 34: Peter Mika's Presentation at SSSW 2011

Query Interpretation

Page 35: Peter Mika's Presentation at SSSW 2011

- 36 -

Query Interpretation in Information Retrieval

• Provide a higher level representation of queries in some conceptual space

– Ideally, the same space in which documents are represented

– Limited user involvement in traditional search engines

• Examples: search assist, facets

• Interpretation happens before the query is executed

– Federation: determine where to send the query

– Ranking feature

– Deals with the sparseness of query streams

• 88% of unique queries are singleton queries (Baeza-Yates et al.)

• Spell correction (“we have also included results for…”), stemming

Page 36: Peter Mika's Presentation at SSSW 2011

- 37 -

Query interpretation in Semantic Search

• Web search users will not formulate structured queries

– Unable to formulate queries in SPARQL or DL

– No knowledge of the schema of the data

– No single ontology

• Billion Triples Challenge dataset contains 33,000 properties

• “Snap to grid”

– Using the ontologies, a summary of the data, or the whole dataset, we find the most likely structured query matching the user input (see the sketch after this list)

• Example: “starbucks nyc” -> company name:”Starbucks” in location:”New York City”

• Larger user involvement

– Guiding the user in constructing the query

• Example: Freebase Suggest

– Displaying back the interpretation of the query

• Example: TrueKnowledge
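
A minimal sketch of the “snap to grid” idea, assuming tiny hypothetical gazetteers of company names and locations (a real system would derive these from the indexed data or from query logs): the keyword query is mapped to the most plausible structured interpretation.

# Hypothetical gazetteers for illustration only.
COMPANIES = {"starbucks": "Starbucks"}
LOCATIONS = {"nyc": "New York City", "barcelona": "Barcelona"}

def snap_to_grid(query):
    interpretation = {}
    for token in query.lower().split():
        if token in COMPANIES:
            interpretation["company"] = COMPANIES[token]
        elif token in LOCATIONS:
            interpretation["location"] = LOCATIONS[token]
        else:
            interpretation.setdefault("keywords", []).append(token)
    return interpretation

print(snap_to_grid("starbucks nyc"))
# {'company': 'Starbucks', 'location': 'New York City'}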

Page 37: Peter Mika's Presentation at SSSW 2011

Indexing and Ranking

Page 38: Peter Mika's Presentation at SSSW 2011

- 39 -

Indexing

• Search requires matching and ranking

– Matching selects a subset of the elements to be scored

• The goal of indexing is to speed up matching

– Retrieval needs to be performed in milliseconds

– Without an index, retrieval would require streaming through the collection

• The type of index depends on the query language to support

– SPARQL is a highly expressive, SQL-like query language for experts

• DB-style indexing

– End-users are accustomed to keyword queries with very limited structure (see Pound et al., WWW 2010)

• IR-style indexing

Page 39: Peter Mika's Presentation at SSSW 2011

- 40 -

DB-style indexing

• Typical indexing in RDF triple stores

– B+ trees to support the basic graph patterns

• (s, p, ?o), (?s, p, o), etc. (see the index sketch after this list)

– Query planning for efficient join processing

• No ranking

• Limited support to restrict results by keyword matching

– SPARQL regular expression matching

• { ?x dc:title ?title FILTER regex(?title, "web", "i" ) }

• Implemented as a filtering of results

– Magic predicates

• Non-standard SPARQL extensions for full-text search

• SELECT ?label WHERE { <abstract:> fts:exactMatch ?label . }

• Implemented using the full-text search of the underlying DBMS or using a side-index (e.g. Lucene)
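
An in-memory caricature of the kind of indexes a triple store keeps for basic graph patterns; real systems use B+ trees over several permutations of subject, predicate and object, while plain dictionaries stand in for the trees here.

from collections import defaultdict

class TripleIndex:
    def __init__(self):
        self.sp_o = defaultdict(list)   # (s, p) -> objects,  answers (s, p, ?o)
        self.po_s = defaultdict(list)   # (p, o) -> subjects, answers (?s, p, o)

    def add(self, s, p, o):
        self.sp_o[(s, p)].append(o)
        self.po_s[(p, o)].append(s)

    def match(self, s=None, p=None, o=None):
        if s is not None and p is not None and o is None:
            return [(s, p, x) for x in self.sp_o[(s, p)]]
        if s is None and p is not None and o is not None:
            return [(x, p, o) for x in self.po_s[(p, o)]]
        raise NotImplementedError("a full store indexes all access patterns")

idx = TripleIndex()
idx.add("ex:roi", "foaf:name", "Roi Blanco")
print(idx.match(s="ex:roi", p="foaf:name"))   # [('ex:roi', 'foaf:name', 'Roi Blanco')]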

Page 40: Peter Mika's Presentation at SSSW 2011

- 41 -

IR-style indexing

• Index data as text

– Create virtual documents from data

– One virtual document per subgraph, resource or triple

• typically: resource

• Key differences to Text Retrieval

– RDF data is structured

– Minimally, queries on property values are required

• No join processing or limited join processing

– e.g. tree-shaped queries
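
A minimal sketch of the virtual-document construction described on this slide: one document per resource, with the text of each literal value placed in a field named after its predicate, ready for an ordinary inverted-index engine. Resource and property names are illustrative.

from collections import defaultdict

def virtual_documents(triples):
    docs = defaultdict(lambda: defaultdict(list))
    for s, p, o in triples:
        if isinstance(o, str):                 # keep only text-valued properties
            docs[s][p].append(o)
    return docs

docs = virtual_documents([
    ("ex:roi", "foaf:name", "Roi Blanco"),
    ("ex:roi", "ex:worksAt", "Yahoo! Research Barcelona"),
])
# docs["ex:roi"] -> {"foaf:name": ["Roi Blanco"], "ex:worksAt": ["Yahoo! Research Barcelona"]}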

Page 41: Peter Mika's Presentation at SSSW 2011

- 44 -

Ranking methods

• The unit of retrieval is either an object or a sub-graph

– The sub-graph induced by matching keywords in the query

• Ranking methods from Information Retrieval

– TF-IDF, BM25F, language models

– Methods such as BM25F allow weighting by predicate (see the sketch after this list)

• Machine learning is used to tune parameters and incorporate query-independent (static) features

– Example: authority scores of datasets computed using PageRank (Harth et al. ISWC 2009)

• Additional topics

– Relevance feedback and personalization

– Click-based ranking

– De-duplication

– Diversification
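
To illustrate how BM25F can weight predicates differently, here is a compact sketch: term frequencies from each field (here, each predicate) are length-normalised, scaled by a per-predicate weight, pooled into one pseudo-frequency, and combined with an IDF term. The parameter names and collection-statistics layout are assumptions of this sketch, not taken from the slides.

import math

def bm25f(query_terms, doc_fields, weights, stats, k1=1.2, b=0.75):
    # doc_fields: {predicate: text}; weights: {predicate: weight}
    # stats: {"N": num_docs, "df": {term: doc_freq}, "avg_len": {predicate: avg_tokens}}
    score = 0.0
    for term in query_terms:
        pseudo_tf = 0.0
        for field, text in doc_fields.items():
            tokens = text.lower().split()
            tf = tokens.count(term)
            if tf == 0:
                continue
            norm = 1.0 + b * (len(tokens) / stats["avg_len"][field] - 1.0)
            pseudo_tf += weights.get(field, 1.0) * tf / norm
        df = stats["df"].get(term, 0)
        idf = math.log(1.0 + (stats["N"] - df + 0.5) / (df + 0.5))
        score += idf * pseudo_tf / (k1 + pseudo_tf)
    return score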

Page 42: Peter Mika's Presentation at SSSW 2011

Evaluation

Harry Halpin, Daniel Herzig, Peter Mika, Jeff Pound, Henry Thompson, Roi Blanco, Thanh Tran Duc

Page 43: Peter Mika's Presentation at SSSW 2011

- 46 -

Semantic Search evaluation

• Started at SemSearch 2010

• Two tasks

– Entity Search

• Queries where the user is looking for a single real-world object

• Pound et al. Ad-hoc Object Retrieval in the Web of Data, WWW 2010.

– List search (new in 2011)

• Queries where the user is looking for a class of objects

• Billion Triples Challenge 2009 dataset

• Evaluated using Amazon’s Mechanical Turk

– See Halpin et al. Evaluating Ad-Hoc Object Retrieval, IWEST 2010.

• Prizes sponsored by Yahoo! Labs (2011)

Page 44: Peter Mika's Presentation at SSSW 2011

- 47 -

Hosted by Yahoo! Labs at semsearch.yahoo.com

Page 45: Peter Mika's Presentation at SSSW 2011

- 48 -

Assessment with Amazon Mechanical Turk

– Evaluation using non-expert judges

• Paid $0.20 per 12 results

– Typically done in 1-2 minutes

– $6-$12 an hour

– Sponsored by the European SEALS project

• Each result is evaluated by 5 workers

– Workers are free to choose how many tasks they do

• Makes agreement difficult to compute

Number of tasks completed per worker (2010)

Page 46: Peter Mika's Presentation at SSSW 2011

- 49 -

Evaluation form

Page 47: Peter Mika's Presentation at SSSW 2011

- 50 -

Evaluation form

Page 48: Peter Mika's Presentation at SSSW 2011

- 52 -

Lessons learned

• Finding complex queries is not easy

– Query logs from web search engines contain simple queries

– Semantic Web search engines are not used by the general public

• Computing agreement is difficult

– Each judge evaluates a different number of items

• Result rendering is critically important

• The Semantic Web is not necessarily all DBpedia

– sub30-RES.3: 40.6% DBpedia

– sub31-run3: 93.8% DBpedia

• Follow up experiments validated the Mechanical Turk approach

– Blanco et al. Repeatable and Reliable Search System Evaluation using Crowd-Sourcing, SIGIR 2011

Page 49: Peter Mika's Presentation at SSSW 2011

- 53 -

Related work

• TREC Entity Track

– Document retrieval + extraction

• Question Answering over Linked Data

– Translation of natural language questions to SPARQL

• SEALS campaigns

– Interactive search

• Join the discussion at [email protected]

Page 50: Peter Mika's Presentation at SSSW 2011

Search interface

Page 51: Peter Mika's Presentation at SSSW 2011

- 55 -

Search Interface

• Semantic Search brings improvements in

– Snippet generation

– Navigation of the domain

– Adaptive and interactive presentation

• Presentation adapts to the kind of query and results presented

• Object results can be actionable, e.g. buy this product

• User can provide feedback on particular objects or data sources

– Aggregated search

• Grouping similar items, summarizing results in various ways

• Filtering (facets), possibly across different dimensions

– Query and task templates

• Help the user to fulfill the task by placing the query in a task context

Page 52: Peter Mika's Presentation at SSSW 2011

- 56 -

Snippet generation using metadata

• Yahoo displays enriched search results for pages that contain microformat or RDFa markup using recognized ontologies

– Displaying data, images, video

– Example: GoodRelations for products

– Enhanced results also appear for sites from which we extract information ourselves

• Also used for generating facets that can be used to restrict search results by object type

– Example: “Shopping sites” facet for products

• Documentation and validator for developers

– http://developer.search.yahoo.com

• Formerly: SearchMonkey allowed developers to customize the result presentation and create new ones for any object type

• Haas et al. Enhanced Results in Web Search. SIGIR 2011

Page 53: Peter Mika's Presentation at SSSW 2011

- 57 -

Example: Yahoo! Enhanced Results

Enhanced result with deep links, rating, address.

Page 54: Peter Mika's Presentation at SSSW 2011

- 58 -

Example: Yahoo! Vertical Intent Search

Related actors and movies

Page 55: Peter Mika's Presentation at SSSW 2011

- 59 -

Example: Sig.ma Semantic Information Mashup (DERI)

Page 56: Peter Mika's Presentation at SSSW 2011

- 60 -

Future work in Semantic Web Search

• Information Extraction for semantic search

• Data quality assessment

• Reasoning for search

• Efficient processing of hybrid queries

• Data display and visualization

• Mobile search

• Semantic advertising

• Search log and website traffic analysis

Page 57: Peter Mika's Presentation at SSSW 2011

- 61 -

The End

• Many thanks for contributions by members of the SemSearch group at Yahoo! Research in Barcelona

• Contact

[email protected]

– Internships available for PhD students (deadline in January)