Transfer-based MT with Strong Decoding
for a Miserly Data Scenario
Alon Lavie
Language Technologies Institute
Carnegie Mellon University
Joint work with: Stephan Vogel, Kathrin Probst, Erik Peterson, Ari Font-Llitjos, Lori Levin, Rachel Reynolds, Jaime Carbonell, Richard Cohen
July 21, 2003 TIDES MT Evaluation Workshop
Rationale and Motivation
• Our Transfer-based MT approach is specifically designed for limited-data scenarios
• The Hindi SLE was the first open-domain, large-scale test for our system, but… Hindi turned out not to be a limited-data scenario
  – 1.5 million words of parallel text
• Lessons learned by the end of the SLE:
  – Basic XFER system did not have a strong decoder
  – "Noisy" statistical lexical resources interfere with transfer rules in our basic XFER system
Rationale and Motivation
Research Questions:
• How would we do in a more "realistic" minority-language scenario, with very limited resources? How does XFER compare with EBMT and SMT under such a scenario?
• How well can we do when we add a strong decoder to our XFER system?
• What is the effect of Multi-Engine combination when using a strong decoder?
A Limited Data Scenario for Hindi-to-English
• Put together a scenario with "miserly" data resources:
  – Elicited Data corpus: 17,589 phrases
  – Cleaned portion (top 12%) of the LDC dictionary: ~2,725 Hindi words (23,612 translation pairs)
  – Manually acquired resources during the SLE:
    • 500 manual bigram translations
    • 72 manually written phrase transfer rules
    • 105 manually written postposition rules
    • 48 manually written time expression rules
• No additional parallel text!!
Learning Transfer-Rules from Elicited Data
• Rationale:
  – Large bilingual corpora not available
  – Bilingual native informant(s) can translate and word-align a well-designed elicitation corpus, using our elicitation tool
  – Controlled Elicitation Corpus designed to be typologically comprehensive and compositional
  – Significantly enhance the elicitation corpus using a new technique for extracting appropriate data from an uncontrolled corpus
  – Transfer-rule engine and learning approach support acquisition of generalized transfer rules from the data
The CMU Elicitation Tool
Elicited Data Collection
• Goal: acquire high-quality word-aligned Hindi-English data to support system development, especially grammar development and automatic grammar learning
• Recruited a team of ~20 bilingual speakers
• Extracted a corpus of phrases (NPs and PPs) from the Brown Corpus section of the Penn TreeBank
• Extracted corpus divided into files and assigned to translators, here and in India
• Controlled Elicitation Corpus also translated into Hindi
• Resulting in a total of 17,589 word-aligned translated phrases
XFER System Architecture
[Architecture diagram: a Learning Module, in which the Elicitation Process (with a user/informant) and the SVS Learning Process produce Transfer Rules, feeds the Run-Time Module, in which SL Input passes through the Parser, Transfer Engine, TL Generator, and Decoder Module to yield TL Output.]
The Transfer Engine

Analysis: source text is parsed into its grammatical structure, which determines the order in which transfer rules apply.
Example: 他 看 书。 (he read book)
[Source parse tree: S → NP (N 他), VP (V 看, NP (N 书))]

Transfer: a target-language tree is created by reordering, insertion, and deletion. Source words are translated with the transfer lexicon, and the article "a" is inserted into the object NP.
[Target tree: S → NP (N he), VP (V read, NP (DET a, N book))]

Generation: target-language constraints are checked and the final translation is produced; e.g. "reads" is chosen over "read" to agree with "he".

Final translation: "He reads a book"
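To make the three stages concrete, here is a minimal sketch in Python of the analysis → transfer → generation flow for the example above. This is not the actual transfer engine: the toy parser, transfer lexicon, and agreement check are assumptions sized to this single example.

TRANSFER_LEXICON = {"他": "he", "看": "read", "书": "book"}

def analyze(tokens):
    """Toy 'analysis': fixed S -> NP(N) VP(V NP(N)) structure for a 3-word input."""
    subj, verb, obj = tokens
    return ("S", ("NP", ("N", subj)), ("VP", ("V", verb), ("NP", ("N", obj))))

def transfer(tree):
    """Toy 'transfer': translate leaves and insert the article 'a' into the object NP."""
    (_, (_, (_, subj)), (_, (_, verb), (_, (_, obj)))) = tree
    return ("S",
            ("NP", ("N", TRANSFER_LEXICON[subj])),
            ("VP", ("V", TRANSFER_LEXICON[verb]),
                   ("NP", ("DET", "a"), ("N", TRANSFER_LEXICON[obj]))))

def generate(tree):
    """Toy 'generation': enforce subject-verb agreement and emit the final string."""
    subj = tree[1][1][1]
    verb = tree[2][1][1]
    object_np = tree[2][2]
    if subj in {"he", "she", "it"}:     # 3rd-person singular subject
        verb += "s"                     # read -> reads
    words = [subj, verb] + [leaf for _, leaf in object_np[1:]]
    return " ".join(words).capitalize() + "."

print(generate(transfer(analyze(["他", "看", "书"]))))   # -> He reads a book.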
Transfer Rule Formalism

A rule encodes: type information; part-of-speech/constituent information; alignments; x-side constraints; y-side constraints; and xy-constraints, e.g. ((Y1 AGR) = (X1 AGR)).

;SL: the man, TL: der Mann
NP::NP [DET N] -> [DET N]
(
 (X1::Y1)
 (X2::Y2)
 ((X1 AGR) = *3-SING)
 ((X1 DEF) = *DEF)
 ((X2 AGR) = *3-SING)
 ((X2 COUNT) = +)
 ((Y1 AGR) = *3-SING)
 ((Y1 DEF) = *DEF)
 ((Y2 AGR) = *3-SING)
 ((Y2 GENDER) = (Y1 GENDER))
)
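As a rough illustration of the pieces of information such a rule carries, the sketch below holds the "the man → der Mann" rule in a small Python container. The field names and the constraint encoding are assumptions for illustration, not the system's actual representation.

from dataclasses import dataclass, field

@dataclass
class TransferRule:
    rule_type: str                      # type information, e.g. "NP::NP"
    x_side: list                        # source constituent sequence, e.g. ["DET", "N"]
    y_side: list                        # target constituent sequence, e.g. ["DET", "N"]
    alignments: list                    # (x_index, y_index) pairs, 1-based
    x_constraints: list = field(default_factory=list)   # source-side feature constraints
    y_constraints: list = field(default_factory=list)   # target-side feature constraints
    xy_constraints: list = field(default_factory=list)  # constraints linking both sides

# The rule from this slide, re-expressed in that container:
the_man_rule = TransferRule(
    rule_type="NP::NP",
    x_side=["DET", "N"], y_side=["DET", "N"],
    alignments=[(1, 1), (2, 2)],
    x_constraints=[("X1", "AGR", "*3-SING"), ("X1", "DEF", "*DEF"),
                   ("X2", "AGR", "*3-SING"), ("X2", "COUNT", "+")],
    y_constraints=[("Y1", "AGR", "*3-SING"), ("Y1", "DEF", "*DEF"),
                   ("Y2", "AGR", "*3-SING")],
    xy_constraints=[("Y2", "GENDER", ("Y1", "GENDER"))],
)
print(the_man_rule.rule_type, the_man_rule.x_side, "->", the_man_rule.y_side)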
Example Transfer Rule

;; PASSIVE OF SIMPLE PAST (NO AUX) WITH LIGHT VERB
;; passive of 43 (7b)
{VP,28}
VP::VP : [V V V] -> [Aux V]
(
 (X1::Y2)
 ((x1 form) = root)
 ((x2 type) =c light)
 ((x2 form) = part)
 ((x2 aspect) = perf)
 ((x3 lexwx) = 'jAnA')
 ((x3 form) = part)
 ((x3 aspect) = perf)
 (x0 = x1)
 ((y1 lex) = be)
 ((y1 tense) = past)
 ((y1 agr num) = (x3 agr num))
 ((y1 agr pers) = (x3 agr pers))
 ((y2 form) = part)
)
Rule Learning - Overview
• Goal: acquire syntactic transfer rules
• Use available knowledge from the source side (grammatical structure)
• Three steps (a minimal sketch of step 1 follows below):
  1. Flat Seed Generation: first guesses at transfer rules; no syntactic structure
  2. Compositionality: use previously learned rules to add structure
  3. Seeded Version Space Learning: refine rules by generalizing with validation (learn appropriate feature constraints)
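The sketch below illustrates step 1 only: deriving a flat, structure-free seed rule from one word-aligned phrase pair. The input format, POS tags, and the sample Hindi-English pair are assumptions for illustration; the actual learner operates over the elicitation corpus and feature structures.

def flat_seed_rule(src_tagged, tgt_tagged, alignment, constituent="NP"):
    """src_tagged / tgt_tagged: lists of (word, pos); alignment: 1-based (src, tgt) pairs."""
    x_side = [pos for _, pos in src_tagged]
    y_side = [pos for _, pos in tgt_tagged]
    align_strs = ["(X%d::Y%d)" % (i, j) for i, j in alignment]
    header = "%s::%s [%s] -> [%s]" % (constituent, constituent,
                                      " ".join(x_side), " ".join(y_side))
    return header + "\n(\n " + "\n ".join(align_strs) + "\n)"

# Hypothetical aligned pair: Hindi "badA kamarA" <-> English "big room"
src = [("badA", "ADJ"), ("kamarA", "N")]
tgt = [("big", "ADJ"), ("room", "N")]
print(flat_seed_rule(src, tgt, alignment=[(1, 1), (2, 2)]))
# ->
# NP::NP [ADJ N] -> [ADJ N]
# (
#  (X1::Y1)
#  (X2::Y2)
# )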
Examples of Learned Rules (I)

{NP,14244}
;;Score: 0.0429
NP::NP [N] -> [DET N]
(
 (X1::Y2)
)

{NP,14434}
;;Score: 0.0040
NP::NP [ADJ CONJ ADJ N] -> [ADJ CONJ ADJ N]
(
 (X1::Y1) (X2::Y2)
 (X3::Y3) (X4::Y4)
)

{PP,4894}
;;Score: 0.0470
PP::PP [NP POSTP] -> [PREP NP]
(
 (X2::Y1)
 (X1::Y2)
)
Examples of Learned Rules (II)

;; OF DEQUINDRE AND 14 MILE ROAD EAST
PP::PP [N CONJ NUM N N N POSTP] -> [PREP N CONJ NUM N N N]
(
 (X7::Y1) (X1::Y2) (X2::Y3) (X3::Y4)
 (X4::Y5) (X5::Y6) (X6::Y7)
)

NP::NP [ADJ N] -> [ADJ N]
(
 (X1::Y1) (X2::Y2)
 ((X1 NUM) = (Y2 NUM))
 ((X2 CASE) = (X1 CASE))
 ((X2 GEN) = (X1 GEN))
 ((X2 NUM) = (X1 NUM))
)

NP::NP [N N] -> [N N]
(
 (X1::Y1) (X2::Y2)
 ((Y2 NUM) = P)
)
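The [ADJ N] rule above relies on agreement constraints such as ((X2 CASE) = (X1 CASE)). The sketch below shows, purely as an illustration, how such equality constraints can be checked against simple feature structures; the real system uses unification over feature structures rather than the plain dicts assumed here.

def satisfies(constraints, feats):
    """constraints: list of ((node, feature), (node, feature)) equalities.
    feats: dict mapping node name -> dict of feature values."""
    for (n1, f1), (n2, f2) in constraints:
        v1, v2 = feats[n1].get(f1), feats[n2].get(f2)
        if v1 is not None and v2 is not None and v1 != v2:
            return False          # clash: constraint violated
    return True

adj_n_constraints = [(("X2", "CASE"), ("X1", "CASE")),
                     (("X2", "GEN"),  ("X1", "GEN")),
                     (("X2", "NUM"),  ("X1", "NUM"))]

feats_ok  = {"X1": {"CASE": "obl", "GEN": "m", "NUM": "sg"},
             "X2": {"CASE": "obl", "GEN": "m", "NUM": "sg"}}
feats_bad = {"X1": {"NUM": "sg"}, "X2": {"NUM": "pl"}}

print(satisfies(adj_n_constraints, feats_ok))    # True
print(satisfies(adj_n_constraints, feats_bad))   # False (NUM clash)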
Basic XFER System for Hindi
• Three passes:
  – Pass 1: match against phrase-to-phrase entries (full forms, no morphology)
  – Pass 2: morphologically analyze input words and match against the lexicon – matches are allowed to feed into higher-level transfer grammar rules
  – Pass 3: match the original word against the lexicon – provides only word-to-word translation, no feeding into grammar rules
• "Weak" decoding: greedy left-to-right search that prefers longer input segments (see the sketch below)
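A rough sketch of that "weak" decoding step: greedy left-to-right selection of lattice edges, preferring edges that cover longer input spans. The edge format ((start, end), translation) and the toy lattice are assumptions for illustration only.

def weak_decode(edges, n_words):
    """edges: list of ((start, end), translation) covering input positions [start, end)."""
    output, pos = [], 0
    while pos < n_words:
        # among edges starting at pos, take the one covering the longest span
        candidates = [e for e in edges if e[0][0] == pos]
        if not candidates:
            pos += 1                      # no translation: skip the word
            continue
        (start, end), translation = max(candidates, key=lambda e: e[0][1] - e[0][0])
        output.append(translation)
        pos = end
    return " ".join(output)

# Toy lattice over a 3-word input: a phrase edge competes with word edges.
edges = [((0, 2), "prime minister"), ((0, 1), "chief"), ((1, 2), "minister"),
         ((2, 3), "said")]
print(weak_decode(edges, 3))              # -> "prime minister said"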
Manual Grammar Development
• Manual grammar developed only late in the SLE exercise, after morphology and lexical resource issues were resolved
• Covers mostly NPs, PPs and VPs (verb complexes)
• ~70 grammar rules, covering basic and recursive NPs and PPs, and verb complexes of the main tenses in Hindi
Manual Transfer Rules: Example

;; PASSIVE OF SIMPLE PAST (NO AUX) WITH LIGHT VERB
;; passive of 43 (7b)
{VP,28}
VP::VP : [V V V] -> [Aux V]
(
 (X1::Y2)
 ((x1 form) = root)
 ((x2 type) =c light)
 ((x2 form) = part)
 ((x2 aspect) = perf)
 ((x3 lexwx) = 'jAnA')
 ((x3 form) = part)
 ((x3 aspect) = perf)
 (x0 = x1)
 ((y1 lex) = be)
 ((y1 tense) = past)
 ((y1 agr num) = (x3 agr num))
 ((y1 agr pers) = (x3 agr pers))
 ((y2 form) = part)
)
Manual Transfer Rules: Example

; NP1 ke NP2 -> NP2 of NP1
; Example: jIvana ke eka aXyAya
; life of (one) chapter ==> a chapter of life
;
{NP,12}
NP::NP : [PP NP1] -> [NP1 PP]
(
 (X1::Y2)
 (X2::Y1)
; ((x2 lexwx) = 'kA')
)

{NP,13}
NP::NP : [NP1] -> [NP1]
(
 (X1::Y1)
)

{PP,12}
PP::PP : [NP Postp] -> [Prep NP]
(
 (X1::Y2)
 (X2::Y1)
)
Adding a “Strong” Decoder
• XFER system produces a full lattice
• Edges are scored using word-to-word translation probabilities, trained from the limited bilingual data
• Decoder uses an English LM (70 million words)
• Decoder can also reorder words or phrases (up to 4 positions ahead)
• For XFER (strong), ONLY edges from the basic XFER system are used! (A toy scoring sketch follows below.)
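The sketch below shows, in toy form, how such a decoder might score one hypothesis: log translation probability of the chosen lattice edges plus a language-model score, with reordering limited to a small window. The probabilities, the bigram LM, the weight, and the window check are all made-up assumptions for illustration, not the actual decoder.

import math

LM_BIGRAMS = {("<s>", "he"): 0.2, ("he", "reads"): 0.1, ("reads", "a"): 0.2,
              ("a", "book"): 0.3}

def lm_logprob(words, floor=1e-4):
    """Toy bigram LM score with a small floor probability for unseen bigrams."""
    score, prev = 0.0, "<s>"
    for w in words:
        score += math.log(LM_BIGRAMS.get((prev, w), floor))
        prev = w
    return score

def hypothesis_score(edges, lm_weight=1.0, window=4):
    """edges: list of (src_position, translation_word, p_translation), in output order."""
    positions = [p for p, _, _ in edges]
    # enforce the reordering window: no word may move more than `window`
    # source positions away from monotone order
    if any(abs(p - i) > window for i, p in enumerate(positions)):
        return float("-inf")
    words = [w for _, w, _ in edges]
    translation = sum(math.log(p) for _, _, p in edges)
    return translation + lm_weight * lm_logprob(words)

hyp = [(0, "he", 0.6), (2, "reads", 0.4), (3, "a", 0.5), (1, "book", 0.7)]
print(hypothesis_score(hyp))   # finite score: reordering stays within the window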
Testing Conditions
• Tested on a section of JHU-provided data: 258 sentences with four reference translations
  – SMT system (stand-alone)
  – EBMT system (stand-alone)
  – XFER system (naïve decoding)
  – XFER system with "strong" decoder
    • No grammar rules (baseline)
    • Manually developed grammar rules
    • Automatically learned grammar rules
  – XFER+SMT with strong decoder (MEMT)
Results on JHU Test Set
System                           BLEU    M-BLEU   NIST
EBMT                             0.058   0.165    4.22
SMT                              0.093   0.191    4.64
XFER (naïve), manual grammar     0.055   0.177    4.46
XFER (strong), no grammar        0.109   0.224    5.29
XFER (strong), learned grammar   0.116   0.231    5.37
XFER (strong), manual grammar    0.135   0.243    5.59
XFER+SMT                         0.136   0.243    5.65
Effect of Reordering in the Decoder
[Chart: NIST score vs. reordering window (0 to 4) for four configurations: no grammar, learned grammar, manual grammar, and MEMT (XFER + SMT); the NIST axis spans roughly 4.8 to 5.7.]
Observations and Lessons (I)
• XFER with the strong decoder outperformed SMT even without any grammar rules
  – SMT trained on elicited phrases that are very short
  – SMT has insufficient data to train more discriminative translation probabilities
  – XFER takes advantage of morphology (coverage sketch below)
    • Token coverage without morphology: 0.6989
    • Token coverage with morphology: 0.7892
• Manual grammar currently quite a bit better than the automatically learned grammar
  – Learned rules did not use version-space learning
  – Large room for improvement in rule learning
  – Importance of effective, well-founded scoring of learned rules
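The coverage figures above are the fraction of input tokens for which the lexicon offers a translation, with and without first reducing tokens to roots. The sketch below illustrates that measure only; the one-rule "analyzer", the lexicon, and the sample tokens are toy assumptions, not the actual morphology module.

LEXICON = {"kamarA": "room", "badA": "big", "laDakA": "boy"}

def analyze_morph(token):
    """Toy analyzer: map a hypothetical oblique-plural ending back to the root form."""
    return token[:-2] + "A" if token.endswith("oM") else token

def coverage(tokens, use_morphology):
    """Fraction of tokens with a lexicon entry, optionally after morphological analysis."""
    covered = 0
    for tok in tokens:
        form = analyze_morph(tok) if use_morphology else tok
        if form in LEXICON:
            covered += 1
    return covered / len(tokens)

corpus = ["badA", "kamarA", "laDakoM", "Soka"]       # "laDakoM" only matches via its root
print(coverage(corpus, use_morphology=False))         # 0.5
print(coverage(corpus, use_morphology=True))          # 0.75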
Observations and Lessons (II)
• A strong decoder for the XFER system is essential, even with extremely limited data
• XFER system with manual or automatically learned grammar outperforms SMT and EBMT in the extremely limited data scenario
  – Where is the cross-over point?
• MEMT based on the strong decoder produced the best results in this scenario
• Reordering within the decoder provided very significant score improvements
  – Much room for more sophisticated grammar rules
  – The strong decoder can carry some of the reordering "burden"
• Conclusion: transfer rules (both manual and learned) offer significant contributions that can complement existing data-driven approaches
  – Also in medium and large data settings?
Conclusions
• Initial steps toward developing a statistically grounded transfer-based MT system with:
  – Rules that are scored based on a well-founded probability model
  – Strong and effective decoding that incorporates the most advanced techniques used in SMT decoding
• Working from the "opposite" end of research on incorporating models of syntax into "standard" SMT systems [Knight et al.]
• Our direction makes sense in the limited-data scenario
Future Directions
• Significant work on automatic rule learning (especially Seeded Version Space Learning)
• Improved leveraging of manual grammar resources, interaction with bilingual speakers
• Developing a well-founded model for assigning scores (probabilities) to transfer rules
• Improving the strong decoder to better fit the specific characteristics of the XFER model
• MEMT with improved:
  – combination of output from different translation engines with different scorings
  – strong decoding capabilities
Debug Output with Sources

Source: praXAnamaMwrI atalajI , rAjyapAla SrI BAI mahAvIra va muKyamaMwrI SrI xigvijayasiMha sahiwa aneka newAoM ne Soka vyakwa kiyA hE |
Output: <the @unk,25> <, @unk,26> <governor mr. @np1,23> <brother @n,7575> <the @unk,27> <and @lex,6762> <the @unk,28> <mr. @n,20629> <the @unk,29> <accompanied by @postp,140> <grief by many leaders @np,12> <the @unk,30> <act @v,411> <be @aux,12> <. @punct,2>

Source: gyAwavya ho ki jile ke cAroM kRewroM meM mawaxAna wIna aktUbara ko honA hE |
Output: <the @unk,31> <be @aux,12> <that @lex,106> <voting three in four areas of counties @np,12> <oct. @lex,9153> <to @postp,8> <be @aux,12> <be @aux,12> <. @punct,2>
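Each chosen edge in the debug output appears as <translation @source,index>. The small sketch below splits that pattern into structured records; the interpretation of the two fields (the source of the edge and a rule/entry index) is an assumption read off the examples above.

import re

EDGE_RE = re.compile(r"<(.+?) @([^,>]+),(\d+)>")

def parse_debug_edges(line):
    """Turn one line of debug output into a list of {translation, source, index} records."""
    return [{"translation": t, "source": s, "index": int(i)}
            for t, s, i in EDGE_RE.findall(line)]

line = "<governor mr. @np1,23> <accompanied by @postp,140> <act @v,411> <. @punct,2>"
for edge in parse_debug_edges(line):
    print(edge)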
Main CMU Contributions to SLE Shared Resources
OFFICIAL CREDIT ON SLE WEBSITE "PROCESSED RESOURCES":
• CMU Phrase Lexicon Joyphrase.gz (Ying Zhang, 3.5 MB)
• Cleaned IBM lexicon ibmlex-cleaned.txt.gz (Ralf Brown, 1.5 MB)
• CMU Aligned Sentences CMU-aligned-sentences.tar.gz (Lori Levin, 1.3 MB)
• Indian Government Parallel Text ERDC.tgz (Raj Reddy and Alon Lavie, 338 MB)
• CMU Phrases and sentences CMU-phrases+sentences.zip (Lori Levin, 468 KB)
• Bilingual Named Entity List IndiaTodayLPNETranslists.tar.gz (Fei Huang, 54 KB)

OFFICIAL CREDIT ON SLE WEBSITE "FOUND RESOURCES":
• Osho http://www.osho.com/Content.cfm?Language=Hindi
Other CMU Contributions to SLE Shared Resources
FOUND RESOURCES BUT NO CREDIT: [From TidesSLList Archive website]
• Vogel email 6/2
  – Hindi Language Resources: http://www.cs.colostate.edu/~malaiya/hindilinks.html
  – General information on Hindi script: http://www.latrobe.edu.au/indiangallery/devanagari.htm
  – Dictionaries at: http://www.iiit.net/ltrc/Dictionaries/Dict_Frame.html
  – English-to-Hindi dictionary in different formats: http://sanskrit.gde.to/hindi/
  – A small English-to-Urdu dictionary: http://www.cs.wisc.edu/~navin/india/urdu.dictionary
  – The Bible at: http://www.gospelcom.net/ibs/bibles/
  – The EMILLE Project: http://www.emille.lancs.ac.uk/home.htm
  – [Hardcopy phrasebook references]
  – A Monthly Newsletter of Vigyan Prasar: http://www.vigyanprasar.com/dream/index.asp
  – Morphological Analyser: http://www.iiit.net/ltrc/morph/index.htm
Other CMU Contributions to SLE Shared Resources
FOUND RESOURCES BUT NO CREDIT (cont.): [From TidesSLList Archive website]
• Tribble email, via Vogel 6/2 – possible parallel websites:
  – http://www.bbc.co.uk (English)
  – http://www.bbc.co.uk/urdu/ (Hindi)
  – http://sify.com/news_info/news/
  – http://sify.com/hindi/
  – http://in.rediff.com/index.html (English)
  – http://www.rediff.com/hindi/index.html (Hindi)
  – http://www.indiatoday.com/itoday/index.html
  – http://www.indiatodayhindi.com
• Vogel email 6/2
  – http://us.rediff.com/index.html
  – http://www.rediff.com/hindi/index.html [already listed]
  – http://www.niharonline.com/
  – http://www.niharonline.com/hindi/index.html
  – http://www.boloji.com/hindi/index.html
  – http://www.boloji.com/hindi/hindi/index.htm
  – The Gita Supersite: http://www.gitasupersite.iitk.ac.in/
  – Press Information Bureau, Government of India
    • English: http://pib.nic.in/
    • Hindi: http://pib.nic.in/urdu/hindimain.html
Other CMU Contributions to SLE Shared Resources
FOUND RESOURCES BUT NO CREDIT (cont.): [From TidesSLList Archive website]
• 6/20 Parallel Hindi/English webpages:
  – GAIL (Natural Gas Co.) http://gail.nic.in/ UTF-8. [Found by CMU undergrad Web team] [Mike Maxwell, LDC, found it at the same time.]

SHARED PROCESSED RESOURCES NOT ON LDC WEBSITE: [From TidesSLList Archive website]
• Frederking email 6/3 [announced], 6/4 [provided]
  – Ralf Brown's idenc encoding classifier
• Frederking email 6/5
  – PDF extractions from LanguageWeaver URLs:
    http://progress.is.cs.cmu.edu/surprise/Hindi/ParDoc/06-04-2003/English/
    http://progress.is.cs.cmu.edu/surprise/Hindi/ParDoc/06-04-2003/Hindi/
• Frederking email 6/5
  – Richard Wang's Perl ident.pl encoding classifier and ISCII-UTF8.pl converter
• Frederking email 6/11
  – Erik Peterson here has put together a Perl wrapper for the IIIT Morphology package, so that the input can be UTF-8: http://progress.is.cs.cmu.edu/surprise/morph_wrapper.tar.gz
Other CMU Contributions to SLE Shared Resources
SHARED PROCESSED RESOURCES NOT ON LDC WEBSITE (cont.): [From TidesSLList Archive website]
• Levin email 6/13
  – Directory of Elicited Word-Aligned English-Hindi Translated Phrases: http://progress.is.cs.cmu.edu/surprise/Elicited-Data/
• Frederking email 6/20
  – Undecoded but believed-to-be-parallel webpages: http://progress.is.cs.cmu.edu/surprise/merged_urls.txt
  – PDF extractions from same: http://progress.is.cs.cmu.edu/surprise/merged_urls/
• Frederking email 6/24
  – Several individual parallel webpages; sites may have more:
    www.commerce.nic.in/setup.htm
    www.commerce.nic.in/hindi/setup.html
    mohfw.nic.in/kk/95/books1.htm
    mohfw.nic.in/oph.htm
    www.mp.nic.in