Integration of Heterogeneous Databases without Common Domains Using Queries Based on Textual Similarity
William W. Cohen
Machine Learning Dept. and Language Technologies Inst., School of Computer Science, Carnegie Mellon University
Embodied Cognition and Knowledge
What was that paper, and who is this guy giving the talk?
Representation languages:
DBs, KR
Human languages:
NLP, IR
Machine Learning
WHIRL: Word-Based Heterogeneous Information Representation Language
History
• 1982/1984: Ehud Shapiro’s thesis:
– MIS: Learning logic programs as debugging an empty Prolog program
– Thesis contained 17 figures and a 25-page appendix that were a full implementation of MIS in Prolog
– Incredibly elegant work
• “Computer science has a great advantage over other experimental sciences: the world we investigate is, to a large extent, our own creation, and we are the ones to determine if it is simple or messy.”
History
• Grad school in AI at Rutgers
• MTS at AT&T Bell Labs in a group doing KR, DB, learning, information retrieval, …
• My work: learning logical (description-logic-like, Prolog-like, rule-based) representations that model large noisy real-world datasets.
History
• AT&T Bell Labs becomes AT&T Labs Research
• The web takes off – as predicted by Vinge and Gibson
• IR folks start looking at retrieval and question-answering with the Web
• Alon Halevy starts the Information Manifold project to integrate data on the web
– VLDB 2006 10-year Best Paper Award for the 1996 paper on IM
• I started thinking about the same problem in a different way…
History: WHIRL motivation 1
• As the world of computer science gets richer and more complex, computer science can no longer limit itself to studying “our own creation”.
• Tension exists between
– Elegant theories of representation
– The not-so-elegant real world that is being represented
History: WHIRL motivation 1
• The beauty of the real world is its complexity….
History: integration by mediation
• Mediator translates between the knowledge in multiple separate KBs
• Each KB is a separate “symbol system”
– No formal connection between them except via the mediator
WHIRL idea: exploit linguistic properties of the HTML “veneer” of web-accessible DBs
TFIDF similarity
WHIRL Motivation 2: Web KBs are embodied
Link items as needed by Q

Query Q:
SELECT R.a,S.a,S.b,T.b FROM R,S,T
WHERE R.a=S.a and S.b=T.b

R.a | S.a | S.b | T.b
Anhai | Anhai | Doan | Doan
Dan | Dan | Weld | Weld
(strongest links: those agreeable to most users)
William | Will | Cohen | Cohn
Steve | Steven | Minton | Mitton
(weaker links: those agreeable to some users)
William | David | Cohen | Cohn
(even weaker links…)
Link items as needed by Q

WHIRL approach:
Query Q:
SELECT R.a,S.a,S.b,T.b FROM R,S,T
WHERE R.a~S.a and S.b~T.b (~ means TFIDF-similar)

R.a | S.a | S.b | T.b
Anhai | Anhai | Doan | Doan
Dan | Dan | Weld | Weld
William | Will | Cohen | Cohn
Steve | Steven | Minton | Mitton
William | David | Cohen | Cohn

Incrementally produce a ranked list of possible links, with “best matches” first. The user (or a downstream process) decides how much of the list to generate and examine.
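To make the “soft join” above concrete, here is a minimal, self-contained Python sketch of a TFIDF-weighted similarity join. It is not WHIRL’s actual implementation: the relation contents come from the slide, the scoring is plain token-level TFIDF cosine, and all helper names are invented for illustration.

# Sketch of a TFIDF "soft join": score every pair of names from two relations by
# TFIDF-weighted cosine similarity and emit the best matches first.
import math
from collections import Counter

R = ["Anhai Doan", "Dan Weld", "William Cohen", "Steve Minton"]
S = ["Anhai Doan", "Dan Weld", "Will Cohn", "Steven Mitton", "David Cohn"]

def tfidf_vectors(docs):
    """Map each string to a unit-length dict of term -> TFIDF weight."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vec = {t: (1 + math.log(c)) * math.log(n / df[t]) for t, c in tf.items()}
        norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
        vectors.append({t: w / norm for t, w in vec.items()})
    return vectors

def cosine(u, v):
    return sum(w * v[t] for t, w in u.items() if t in v)

vecs = tfidf_vectors(R + S)            # IDF pooled over both relations, as a simplification
vr, vs = vecs[:len(R)], vecs[len(R):]

# Rank all candidate links, best matches first; the user or a downstream process
# decides how far down the list to look. Note that whole-token TFIDF scores pairs
# with no shared token (e.g. "Steve"/"Steven") as 0 -- a real system needs softer
# matching to surface the weaker links on the slide.
pairs = sorted(((cosine(u, v), r, s) for u, r in zip(vr, R) for v, s in zip(vs, S)), reverse=True)
for score, r, s in pairs[:5]:
    print(f"{score:.3f}  {r!r} ~ {s!r}")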
WHIRL queries
• Assume two relations:
review(movieTitle, reviewText): archive of reviews
listing(theatre, movieTitle, showTimes, …): now showing
review:
The Hitchhiker’s Guide to the Galaxy, 2005 | This is a faithful re-creation of the original radio series – not surprisingly, as Adams wrote the screenplay ….
Men in Black, 1997 | Will Smith does an excellent job in this …
Space Balls, 1987 | Only a die-hard Mel Brooks fan could claim to enjoy …
… | …

listing:
Star Wars Episode III | The Senator Theater | 1:00, 4:15, & 7:30pm.
Cinderella Man | The Rotunda Cinema | 1:00, 4:30, & 7:30pm.
… | … | …
WHIRL queries
• “Find reviews of sci-fi comedies” [movie domain]
FROM review as r SELECT * WHERE r.text~’sci fi comedy’
(like standard ranked retrieval for “sci-fi comedy”)
• “Where is [that sci-fi comedy] playing?”
FROM review as r, listing as s SELECT *
WHERE r.title~s.title and r.text~’sci fi comedy’
(best answers: titles are similar to each other – e.g., “Hitchhiker’s Guide to the Galaxy” and “The Hitchhiker’s Guide to the Galaxy, 2005” – and the review text is similar to “sci-fi comedy”)
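As an aside on scoring, a conjunctive query like the one above has one similarity score per conjunct, and a simple way to rank whole answers is to multiply those scores, treating each as a degree of match in [0,1]. The sketch below is illustrative only: the similarity values are made up and the helper names are mine, not WHIRL code.

# Illustrative only: candidate answers to the query above, with invented
# per-conjunct similarity scores; the conjunction is scored by their product
# and answers are ranked by that product.
def score_answer(sim_title, sim_text):
    """Score for: WHERE r.title~s.title and r.text~'sci fi comedy'."""
    return sim_title * sim_text

candidates = [
    # (review title, listed title, sim(r.title, s.title), sim(r.text, 'sci fi comedy'))
    ("The Hitchhiker's Guide to the Galaxy, 2005", "Hitchhiker's Guide to the Galaxy", 0.81, 0.40),
    ("Men in Black, 1997", "Cinderella Man", 0.05, 0.35),
]
ranked = sorted(candidates, key=lambda c: score_answer(c[2], c[3]), reverse=True)
for r_title, s_title, sim_t, sim_x in ranked:
    print(f"{score_answer(sim_t, sim_x):.3f}  {r_title}  <->  {s_title}")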
WHIRL queries
• Similarity is based on TFIDF: rare words are most important.
• Search for high-ranking answers uses inverted indices…
Review archive titles: The Hitchhiker’s Guide to the Galaxy, 2005; Men in Black, 1997; Space Balls, 1987; …
Listed titles: Star Wars Episode III; Hitchhiker’s Guide to the Galaxy; Cinderella Man; …
Years are common in the review archive, so they have low weight.

Inverted index:
hitchhiker → movie00137
the → movie001, movie003, movie007, movie008, movie013, movie018, movie023, movie0031, …
- It is easy to find the (few) items that match on “important” (rare) terms
- Search for strong matches can prune candidates that match only on “unimportant” terms
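A rough sketch of that inverted-index idea, with invented listing IDs and helper names (not the WHIRL search algorithm itself): postings are looked up only for a query’s rarest, highest-IDF terms, so candidates that share only common terms are never generated.

import math
from collections import defaultdict

listings = {
    "movie001": "star wars episode iii",
    "movie037": "hitchhiker's guide to the galaxy",
    "movie088": "cinderella man",
}

index = defaultdict(set)                 # inverted index: term -> ids of listings containing it
for doc_id, title in listings.items():
    for term in title.split():
        index[term].add(doc_id)

n_docs = len(listings)
idf = {t: math.log(n_docs / len(ids)) for t, ids in index.items()}

def candidates(query, max_terms=3):
    """Look up postings only for the query's rarest (highest-IDF) terms."""
    terms = sorted(set(query.lower().split()), key=lambda t: idf.get(t, 0.0), reverse=True)
    hits = set()
    for t in terms[:max_terms]:
        hits |= index.get(t, set())
    return hits

print(candidates("The Hitchhiker's Guide to the Galaxy, 2005"))   # {'movie037'}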
After WHIRL
• Efficient text joins
• On-the-fly, best-effort, imprecise integration
• Interactions between information extraction quality and results of queries on extracted data
• Keyword search on databases
• Use of statistics on text corpora to build intelligent “embodied” systems
• Turney: solving SAT analogies with PMI over word pairs
• Mitchell & Just: predicting fMRI brain images resulting from reading a common noun (“hammer”) from co-occurrence information between nouns and verbs
Recent work: non-textual similarity
[Figure: a graph linking name strings – “William W. Cohen, CMU”, “Dr. W. W. Cohen”, “George W. Bush”, “George H. W. Bush”, “Christos Faloutsos, CMU” – through the tokens they share (william, w, cohen, dr, cmu, …)]
Recent Work
• Personalized PageRank, aka Random Walk with Restart:
– Similarity measure for nodes in a graph, analogous to TFIDF for text in a WHIRL database
– A natural extension to PageRank
– Amenable to learning parameters of the walk (gradient search, with various optimization metrics):
• Toutanova, Manning & Ng, ICML 2004; Nie et al., WWW 2005; Xi et al., SIGIR 2005
– Various speedup techniques exist
– Queries: given type t* and node x, find y such that T(y)=t* and y~x
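For illustration, here is a tiny power-iteration sketch of Personalized PageRank / Random Walk with Restart over a name/token graph like the one a few slides back. The graph contents, restart probability, and helper names are invented, and none of the speedups or learned walk parameters mentioned above are included.

from collections import defaultdict

# Toy graph: name strings linked to the word tokens they contain.
strings = {
    "William W. Cohen, CMU": ["william", "w", "cohen", "cmu"],
    "Dr. W. W. Cohen": ["dr", "w", "cohen"],
    "Christos Faloutsos, CMU": ["christos", "faloutsos", "cmu"],
}
graph = defaultdict(set)
for s, tokens in strings.items():
    for t in tokens:
        graph[s].add(t)
        graph[t].add(s)

def rwr(start, alpha=0.15, iters=50):
    """Power iteration for p = alpha * e_start + (1 - alpha) * W p."""
    p = {n: 0.0 for n in graph}
    p[start] = 1.0
    for _ in range(iters):
        nxt = {n: (alpha if n == start else 0.0) for n in graph}
        for n, mass in p.items():
            nbrs = graph[n]
            for m in nbrs:
                nxt[m] += (1 - alpha) * mass / len(nbrs)
        p = nxt
    return p

# Query: given type t* = "name string" and node x, rank nodes y with T(y) = t* by y ~ x.
x = "William W. Cohen, CMU"
scores = rwr(x)
for node, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    if node in strings and node != x:
        print(f"{s:.4f}  {node}")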
Learning to Search Email
[SIGIR 2006, CEAS 2006, WebKDD/SNA 2007]
Einat Minkov, CMU; Andrew Ng, Stanford
[Figure: an email graph with nodes such as “proposal”, “CMU”, “CALO”, “graph”, “William”, and the dates 6/17/07 and 6/18/07, connected by edges such as “Sent To” and “Term In Subject”]
Tasks that are like similarity queries
• Person name disambiguation: [term “andy”, file msgId] → “person”
• Threading: what are the adjacent messages in this thread? A proxy for finding “more messages like this one”: [file msgId] → “file”
• Alias finding: what are the email-addresses of Jason? [term Jason] → “email-address”
• Meeting attendees finder: which email-addresses (persons) should I notify about this meeting? [meeting mtgId] → “email-address”
Results on one task
[Figure: recall (0–100%) vs. rank (1–10) for person name disambiguation on the Mgmt. game corpus]
Results on several tasks (MAP)
[Figure: MAP (0.4–0.85) for the meetings, name disambiguation, threading, and alias finding tasks on the Mgmt. game, Sager, Shapiro, Farmer, and Germany corpora]
Set Expansion using the Web
• Fetcher: download web pages from the Web
• Extractor: learn wrappers from web pages
• Ranker: rank entities extracted by wrappers

1. Canon  2. Nikon  3. Olympus  4. Pentax  5. Sony  6. Kodak  7. Minolta  8. Panasonic  9. Casio  10. Leica  11. Fuji  12. Samsung  13. …
Richard Wang, CMU
The Extractor
• Learn wrappers from web documents and seeds on the fly
– Utilize semi-structured documents
– Wrappers defined at the character level
• No tokenization required; thus language-independent
• However, very specific; thus page-dependent
– Wrappers derived from document d are applied to d only (a small character-level sketch follows the HTML snippets below)
<li class=" acura"><a href="http://www. curryauto .com/" >
<li class=" nissan"><a href="http://www. curryauto.com/" >
<li class=" ford"><a href="http://www.curry auto.com/" > <img src="/common/logos/ ford/logo-horiz-rgb-lg-dkbg.gif" alt="3"></a> <ul><li class="last"><a href="http://www.curry auto.com/"> <span class="dName">Curry Ford</span>...</li></ul > </li>
<img src="/curryautogroup/images/logo -horiz-rgb-lg-dkbg.gif" alt="5"></a> <ul><li class="last"><a href="http://www.curryacura.com/" > <span class="dName">Curry Acura</span>...</li></ul> </li>
<img src="/common/logos/ nissan/logo-horiz-rgb-lg-dkbg.gif" alt="6"></a> <ul><li class="last"><a href= "http://www.geisau to.com/ "> <span class="dName">Curry Nissan </span>...</li></ul> </li>
Ranking Extractions
• A graph consists of a fixed set of…
– Node types: {seeds, document, wrapper, mention}
– Labeled directed edges: {find, derive, extract}
• Each edge asserts that a binary relation r holds
• Each edge has an inverse relation r⁻¹ (so the graph is cyclic)
[Figure: the seeds “ford”, “nissan”, “toyota” find documents such as curryauto.com and northpointcars.com; documents derive Wrappers #1–#4; wrappers extract mentions, ranked here as “acura” 34.6%, “honda” 26.1%, “chevrolet” 22.5%, “bmw pittsburgh” 8.4%, “volvo chicago” 8.4%]
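A tiny sketch (invented node names, not SEAL’s data structures) of how such a typed, labeled graph might be represented: each relation edge also gets an inverse edge, so a walk from the seed node can reach mentions through documents and wrappers and come back.

from collections import defaultdict

out_edges = defaultdict(list)                 # node -> [(relation, neighbor), ...]
def add_edge(src, rel, dst):
    out_edges[src].append((rel, dst))
    out_edges[dst].append((rel + "^-1", src))  # inverse relation keeps the graph cyclic

seed_node = 'seeds{"ford","nissan","toyota"}'
add_edge(seed_node, "find", "curryauto.com")
add_edge(seed_node, "find", "northpointcars.com")
add_edge("curryauto.com", "derive", "wrapper#3")
add_edge("wrapper#3", "extract", '"acura"')
add_edge("wrapper#3", "extract", '"honda"')

print(out_edges['"acura"'])   # [('extract^-1', 'wrapper#3')]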
Minkov et al. Contextual Search and Name Disambiguation in Email using Graphs. SIGIR 2006
Evaluation Method
• Mean Average Precision
– Commonly used for evaluating ranked lists in IR
– Contains recall- and precision-oriented aspects
– Sensitive to the entire ranking
– Mean of average precisions for each ranked list
• Evaluation: average over 36 datasets in three languages (Chinese, Japanese, English)
1. Average over several 2- or 3-seed queries for each dataset.
2. MAP performance: high 80s to mid 90s
3. Google Sets: MAP in the 40s, English only
AvgPrec(L) = (1 / #TrueEntities) × Σ_r Prec(r) × NewEntity(r)

where L = ranked list of extracted mentions, r = rank
Prec(r) = precision at rank r
NewEntity(r) = 1 if (a) and (b) are true, 0 otherwise
(a) the extracted mention at rank r matches some true mention
(b) no other extracted mention at a rank less than r is of the same entity as the one at rank r
#TrueEntities = total number of true entities in this dataset
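A short sketch of the average-precision computation as defined above; MAP is then the mean of this value over the ranked lists. The entity dictionary and mention strings are invented, and this is one reading of the slide’s definition rather than the authors’ evaluation code.

def average_precision(ranked_mentions, entity_of, n_true_entities):
    """entity_of maps each correct mention to its true entity; misses are absent."""
    seen_entities = set()
    correct_so_far = 0
    ap = 0.0
    for r, mention in enumerate(ranked_mentions, start=1):
        entity = entity_of.get(mention)
        if entity is not None:
            correct_so_far += 1                  # (a) mention matches a true mention
            prec_at_r = correct_so_far / r
            if entity not in seen_entities:      # (b) first time this entity appears
                ap += prec_at_r
                seen_entities.add(entity)
    return ap / n_true_entities

# Example: two true entities, three extracted mentions ("nisan" is a miss).
entity_of = {"ford": "Ford", "nissan": "Nissan", "nisson": "Nissan"}
print(average_precision(["ford", "nisan", "nisson"], entity_of, n_true_entities=2))
# 1/1 for "ford" plus 2/3 for "nisson", averaged over 2 entities -> 0.833...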
Evaluation Datasets
Top three mentions are the seeds. Try it out at http://rcwang.com/seal
Relational Set Expansion Seeds
Future?
Representation languages:
DBs, KR
Human languages:
NLP, IR
Machine Learning
??