AP19-OLR Challenge: Three Tasks and Their Baselines
Zhiyuan Tang, Dong Wang*, Liming Song
Center for Speech and Language Technologies, Tsinghua University & SpeechOcean
APSIPA ASC, Lanzhou, China
Contents
1. Introduction
2. Data profile
3. Baseline systems and results
4. Conclusions
5. Ranking list
Introduction: Oriental languages
[Figure from wikipedia.org]
Introduction: Oriental language research
Multi and Minority Language ASR (M2ASR): http://m2asr.cslt.org
Goals of M2ASR:
• To solve technical difficulties
• To build practical systems
• To provide open and free data and benchmarks
Introduction: Oriental Language Recognition Challenge
• 2016: 7 languages; transcriptions
• 2017: 10 languages (+3 languages); transcriptions; shorter segments
• 2018: 10 languages; short-utterance, confusing-language and open-set tasks
• 2019: 10 languages (+3 languages); short-utterance, cross-channel and zero-resource tasks
Introduction: Oriental Language Recognition Challenge 2016
AP16-OLR Challenge at APSIPA 2016, Jeju, Korea
AP16-OLR Challenge Results
Rank | Team | Cavg ×100 | EER (%) | minDCF | IDR (%)
1 | NUS and I2R (1), Singapore | 1.13 | 1.09 | 0.0108 | 97.56
2 | NUS and I2R (2), Singapore | 1.70 | 1.02 | 0.0101 | 97.60
3 | USTC, China | 1.79 | 2.17 | 0.0205 | 96.94
4 | NTUT, Taiwan, China | 5.86 | 5.88 | 0.0586 | 87.02
5 | MMCL_RUC, China | 6.06 | 6.16 | 0.0610 | 86.21
6 | PJ-Han, Germany | 14.00 | 17.34 | 0.1365 | 77.65
7 | NTU, Singapore | 14.72 | 17.44 | 0.1657 | 71.44
8 | XS-CO, China | 36.99 | 40.26 | 0.3924 | 31.91
9 | TLO, China | 50.00 | 53.34 | 0.4999 | 12.37
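For reference, EER (equal error rate) is the operating point where the miss rate equals the false-alarm rate. A minimal sketch of how it can be estimated from detection scores, in plain Python; this is an illustrative helper, not the challenge's official scoring tool:

```python
def compute_eer(target_scores, nontarget_scores):
    """Estimate the equal error rate: sweep every observed score as a
    threshold and, where miss and false-alarm rates are closest,
    approximate the EER as their average."""
    best_gap, eer = float("inf"), None
    for t in sorted(set(target_scores) | set(nontarget_scores)):
        p_miss = sum(s < t for s in target_scores) / len(target_scores)
        p_fa = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        if abs(p_miss - p_fa) < best_gap:
            best_gap, eer = abs(p_miss - p_fa), (p_miss + p_fa) / 2
    return eer

# Perfectly separated scores yield an EER of 0.
print(compute_eer([3.0, 4.0], [0.0, 1.0]))  # 0.0
```

Cavg, the primary metric of the challenge, additionally averages miss and false-alarm costs over all target/non-target language pairs rather than over pooled trials.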
Introduction: Oriental Language Recognition Challenge 2017
AP17-OLR Challenge at APSIPA 2017, Kuala Lumpur, Malaysia
In total, 31 teams registered for the challenge. By the extended submission deadline (Dec. 12), 19 teams had submitted complete results, 6 teams submitted partial results or responded actively, and 6 teams did not respond after downloading the data.
Introduction: Oriental Language Recognition Challenge 2018
In total, 25 teams registered; 17 teams appear in the ranking list.
Introduction: Oriental Language Recognition Challenge 2019
Organization Committee:
Zhiyuan Tang, Tsinghua University
Dong Wang, Tsinghua University
Qingyang Hong, Xiamen University
Ming Li, Duke Kunshan University
Xiaolei Zhang, Northwestern Polytechnical University
Liming Song, SpeechOcean
Introduction: Oriental Language Recognition Challenge 2019
Short-utterance identification task: a closed-set identification task, i.e., the language of each test utterance is among the 10 known target languages. The test utterances are as short as 1 second.
Cross-channel identification task: the test data come from channels different from those of the training set.
Zero-resource identification task: no training resources are provided before inference, but several reference utterances are provided for each language.
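One natural way to approach the zero-resource setting is to embed each utterance and score it against the per-language reference utterances. A minimal sketch in plain Python; `identify` and the mean-cosine scoring rule are illustrative assumptions, not the official baseline:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(test_emb, references):
    """references: dict mapping language -> list of reference embeddings.
    Score each language by its mean cosine similarity to the test
    utterance and return the best-scoring language."""
    scores = {lang: sum(cosine(test_emb, r) for r in refs) / len(refs)
              for lang, refs in references.items()}
    return max(scores, key=scores.get)

refs = {"lang-A": [[1.0, 0.0]], "lang-B": [[0.0, 1.0]]}
print(identify([0.9, 0.1], refs))  # lang-A
```

The embeddings themselves would come from a model trained on other languages, since no training data for the target languages is available.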
Data profile: AP16-OL7 and AP17-OL3
Baseline systems and results: X-vector
"Deep Neural Network Embeddings for Text-Independent Speaker Verification", David Snyder, Daniel Garcia-Romero, Daniel Povey and Sanjeev Khudanpur, Interspeech 2017
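In the x-vector model, frame-level TDNN outputs are collapsed by a statistics-pooling layer into one fixed-size utterance-level vector (per-dimension mean concatenated with standard deviation), from which the embedding is extracted. A minimal sketch of the pooling step in plain Python; the real model applies it to neural-network activations:

```python
import math

def statistics_pooling(frames):
    """Pool a variable-length list of frame-level feature vectors into
    one fixed-size vector: per-dimension mean concatenated with
    per-dimension standard deviation."""
    n, dim = len(frames), len(frames[0])
    mean = [sum(f[d] for f in frames) / n for d in range(dim)]
    std = [math.sqrt(sum((f[d] - mean[d]) ** 2 for f in frames) / n)
           for d in range(dim)]
    return mean + std

# Two 2-dimensional frames pool to a single 4-dimensional vector.
print(statistics_pooling([[0.0, 2.0], [2.0, 2.0]]))  # [1.0, 2.0, 1.0, 0.0]
```

Because the pooled vector has a fixed size regardless of utterance length, the same network can score the 1-second test utterances of the short-utterance task.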
Baseline systems and results: Results
Conclusions
• We presented the data profile and the evaluation plan of the AP19-OLR challenge.
• We released baseline systems based on the x-vector model.
• All data resources are free for participants, and the recipes of the baseline systems can be freely downloaded.
OLR Challenge 2019: Ranking
In total, 45 teams registered; 20+ teams appear in the ranking list.
Task 1: Short-utterance
Overview
[Bar chart: Cavg% and EER% for the 21 ranked teams]
Task 1: short-utterance
Top 5
Ranking | Team Name | Institute | Participants
1 | Innovem-Tech | 因诺微科技(天津)有限公司 | 江海, 王化, 刘俊南, 刘文龙, 谷铭佳, 李卓茜
2 | SSLab | Samsung Research Institute China - Beijing | 宋黎明, 王奔旭
3 | xmuspeech | Xiamen University | 李铮, 赵淼, 李静, 郅艺铭, 李琳
4 | Paic-LiangpiXishi | Ping An Technology (Shenzhen) Company Limited | Ruizhang Wang, Yangli Wang, Chong Qin, Yayun Zhou
5 | madeinchina | Communication University of China | 周晓星, 冯芝金, 神瑞雪
Task 1: short-utterance
Top 5
Ranking | Team Name | Cavg | EER%
1 | Innovem-Tech | 0.0212 | 2.47
2 | SSLab | 0.0800 | 7.95
3 | xmuspeech | 0.0818 | 8.65
4 | Paic-LiangpiXishi | 0.0862 | 8.81
5 | madeinchina | 0.1031 | 12.94
Task 2: Cross-channel
Overview
[Bar chart: Cavg% and EER% for the 14 ranked teams]
Task 2: cross-channel
Top 5
Ranking | Team Name | Institute | Participants
1 | SSLab | Samsung | 宋黎明, 王奔旭
2 | anonymous | anonymous | anonymous
3 | xmuspeech | Xiamen University | 李铮, 赵淼, 李静, 郅艺铭, 李琳
4 | BIT-commlab-asr | Beijing Institute of Technology | Qingran Zhan, Shixuan Du, Liqiang Zhang, Shuang Liang, Xiang Xie
5 | 听风者 | 广州国音智能科技有限公司 | 陈琦, 刘敏, 王泽龙, 许敏强
Task 2: cross-channel
Top 5
Ranking | Team Name | Cavg | EER%
1 | SSLab | 0.2008 | 20.24
2 | anonymous | 0.2713 | 27.69
3 | xmuspeech | 0.2741 | 27.44
4 | BIT-commlab-asr | 0.2937 | 29.21
5 | 听风者 | 0.3402 | 34.93
Task 3: Zero-resource
Overview
[Bar chart: Cavg% and EER% for the 9 ranked teams]
Task 3: zero-resource
Top 5
Ranking | Team Name | Institute | Participants
1 | xmuspeech | Xiamen University | 李铮, 赵淼, 李静, 郅艺铭, 李琳
2 | Royal Flush | 浙江核新同花顺网络信息股份有限公司 | 胡新辉, 王鼎
3 | Paic-LiangpiXishi | Ping An Technology (Shenzhen) Company Limited | Ruizhang Wang, Yangli Wang, Chong Qin, Yayun Zhou
4 | Siplab-IITH | Indian Institute of Technology Hyderabad | Shaik Mohammad Rafi, Sri Rama Murty Kodukula, Gundluru Ramesh
5 | madeinchina | Communication University of China | 周晓星, 冯芝金, 神瑞雪
Task 3: zero-resource
Top 5
Ranking | Team Name | Cavg | EER%
1 | xmuspeech | 0.0113 | 1.13
2 | Royal Flush | 0.0777 | 7.57
3 | Paic-LiangpiXishi | 0.1098 | 10.70
4 | Siplab-IITH | 0.1837 | 18.98
5 | madeinchina | 0.2129 | 21.50
Thanks a lot.
http://olr.cslt.org