
A MOBILE CONVERSATION ASSISTANT TO ENHANCE COMMUNICATIONS FOR HEARING-IMPAIRED CHILDREN

Toon De Pessemier1, Laurens Van Acker2, Emilie Van Dijck3, Karin Slegers3, Wout Joseph1 and Luc Martens1

1Ghent University / IBBT, Dept. of Information Technology, Gaston Crommenlaan 8 box 201, B-9050 Ghent, Belgium
2Hogeschool Gent, Dept. of Engineering, Valentin Vaerwyckweg 1, 9000 Gent

3K. U. Leuven / IBBT, Centre for User Experience Research (CUO), Parkstraat 45 box 3605, B-3000 Leuven, Belgium
{toon.depessemier, wout.joseph, luc.martens}@intec.ugent.be, [email protected], [email protected], [email protected]

Keywords: communication assistant, hearing-impaired, context clarification.

Abstract: Children and young people who are deaf or hearing-impaired often do not master the spoken and written language well. Moreover, their friends and relatives do not always speak sign language. In this research, a communication aid is developed on the Google Android mobile operating system to reduce these barriers by converting text into relevant images and movies, thereby clarifying the meaning of the text. Users can select text from webpages, e-mail, and SMS messages, or text can be generated using speech recognition or optical character recognition as the input source. The communication tool analyzes the sentences and tries to understand the context and meaning of the text in order to select the most appropriate visual content originating from online sources like YouTube, Flickr, Google Images, and Vimeo. User tests and focus groups with deaf children, together with interviews with experts in the field of deaf and hard-of-hearing people, confirmed the need for a communication aid and demonstrated the utility of the proposed tool.

1 INTRODUCTION

Communication of deaf or hearing-impaired people with their surroundings is often not optimal. Nowadays, many young deaf people get a cochlear implant, a surgically implanted electronic device that provides a sense of sound to a person who is profoundly deaf or severely hard of hearing. Nevertheless, many deaf people (particularly children) still have a language delay, especially when they are born deaf. They do not catch spoken communications due to their hearing impairment, and their vocabulary is limited. This makes it difficult for them to understand written messages as well. The complex language input from the environment is often misunderstood, which may have a negative influence on the education and the communication skills of the children (Fortnum et al., 2002). Sign language is often more accessible, but has the problem that most parents of deaf children are not deaf and sign language is not their native language (Johnston, 2006). These language barriers make communication between deaf children and their families difficult.

Although studies have indicated that deaf people are willing to use new technology to improve their communication skills with hearing people (Brière, 1995), so far no fully-fledged communication tools are available to reduce these problems. Likewise, several authors refer to the importance of technology in helping to reduce the communication problems of deaf people (Dubuisson and Daigle, 1998; Hotton, 2004).

This paper presents a communication aid, developed on the Google Android mobile operating system, that tries to reduce the communication barriers for people who are deaf or hearing-impaired and who do not master the spoken and written language well. Because deaf children in particular have difficulties with communication, they constitute the target group of the communication aid; this group evaluated the developed application through focus groups. The goal of the application is to assist these people in conversations with hearing people and in understanding written texts, thereby improving their reading comprehension as well as making it more fun for deaf children to learn new words.

The remainder of this paper is organized as follows. Section 2 provides an overview of research work related to conversation assistants for deaf or


hearing-impaired people. In Section 3, the technical capabilities and input methods of the application are described. A number of observations and results based on focus groups are shared in Section 4. Finally, Section 5 is dedicated to our conclusions.

2 RELATED WORK

Although deaf people are a significant segment of the population, only limited research focuses on devices or applications designed to facilitate face-to-face communication between hearing persons and deaf people who use sign language and cannot speak. Companies like Oralys (www.oralys.ca) provide tools that enable a new type of communication with symbols. Their Communicator Mobile software for Pocket PC is designed for direct communication, but only for communication from the deaf person to the hearing person (Oralys, 2011). The idea is to replace mouse and keyboard, or pen and paper, with iconic cards. Although the application already contains 3800 icons, users' communication is restricted to these predefined icons. The deaf person selects icons which indicate typical objects or situations by tapping the touch screen. Next, the series of icons is converted into a voice message in English, French, or Spanish, which is played aloud on the device.

The influence of such an assistive technology, designed to facilitate face-to-face communication between deaf and hearing persons, on social participation was evaluated in a pilot study with deaf persons (Vincent et al., 2007). In this research, 15 deaf adults completed a three-month field study, with pre- and post-intervention measures. The results showed a significant improvement in social participation and conversation with a hearing person after using the assistive technology.

Related to a conversation assistant for deaf people is the reading assistant, which supports people who suffer from dyslexia. These reading assistants allow users to enter phrases and click the words that are unclear in order to get the meaning of a word. These explanations can be complemented with pictures visualizing the meaning of the word, and spelling errors can be corrected automatically. In this research, slower readers tended to show an increased reading rate with computer-assisted reading, with the opposite result for faster readers (Sorrell et al., 2007).

For video conversations on desktop computers, deaf people mainly use 'ooVoo', an application similar to Skype that enables video chat in HD quality. Deaf people have already tried to converse using

sign language via the mobile application, but technical limitations of mobile devices and cellular data networks hinder a smooth conversation. Since subtle differences in gestures are important to interpret sign language, fluent and high-quality video is essential for deaf people. Besides, mobile phones introduce an additional difficulty for conversations using sign language: users have to hold the device while they make gestures to converse.

3 FUNCTIONALITY

The functional requirements of the application were derived from interviews with experts and teachers of deaf children. Since deaf children have a poor knowledge of grammar and often have difficulty distinguishing similar words, an explanatory dictionary based on pictures and videos would be useful according to the interviewees. They emphasized a healthy balance between a simple, easy-to-use application on the one hand and additional features that may be useful for the children on the other.

3.1 Input methods

The developed application can assist deaf children in interpreting sentences during daily communication activities. Before the analysis and interpretation process can start, these sentences have to be entered into the application. The first and most basic input option is to manually enter text on the device, just like the way users type SMS messages. After selecting the option "Keyboard" in the main menu, users get a text box in which they can type the sentence that needs clarification. After tapping on "Translation", the text is sent to the server for analysis using the data connection of the device, as discussed in Section 3.2.

The second option is to use the application as a web aid. If the option "Internet" is chosen, an Internet browser is launched in which users can navigate and select (a part of) a sentence or a web form that is unclear by using the "Selection" feature of the application menu. Based on punctuation, individual sentences are distinguished before the analysis starts.
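The punctuation-based sentence splitting described above can be sketched as follows. This is an illustrative fragment, not the application's actual code; the particular regular expression is our assumption.

```python
import re

def split_sentences(text):
    # Split selected text into individual sentences at sentence-final
    # punctuation (. ! ?), so each sentence can be analyzed separately.
    parts = re.split(r'(?<=[.!?])\s+', text.strip())
    return [p for p in parts if p]
```

A lookbehind keeps the punctuation attached to its sentence, so the parts remain readable when shown back to the user.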

Users can rely on the third option, speech recognition, for conversations with hearing people. If the speech recognition is successful, the spoken sentence is converted into text, rendered in the interface, and sent for analysis to the server. Our application uses the speech recognition software of the Android operating system, which is available in different languages such as English, Japanese, and Chinese. Since Dutch is the mother tongue of the children who tested


the application, it was configured to use the Dutch version of Android's speech recognition.

For billboards, menus in restaurants, information signs, and other written texts that users may encounter during their daily life, the fourth option may be used. Users can take a picture of the incomprehensible text using the camera of their mobile device. The software automatically focuses the image and subsequently sends it to an OCR (Optical Character Recognition) service, which converts the image into readable text. Next, the recognized text can be sent to the server for analysis, or users can opt to take a new picture if the OCR was not successful due to, e.g., a blurry picture. In our application, the OCR software of WiseTrend is used (WiseTREND, 2011), but alternative solutions are easy to integrate. This OCR service is available for different languages (we used the Dutch version for testing the application) and provides features like deskewing, removing texture, automatic rotation of the picture, and even detecting barcodes.
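The OCR step, in outline, amounts to uploading the camera image together with recognition options. The sketch below only illustrates that shape; the endpoint URL and field names are placeholders, not the real WiseTrend API.

```python
def build_ocr_request(image_bytes, language="nl"):
    # Assemble the pieces of an OCR upload: a hypothetical endpoint,
    # recognition options (language, deskewing, rotation), and the image data.
    return {
        "url": "https://ocr.example.com/recognize",  # placeholder, not the real service
        "fields": {"language": language, "deskew": "true", "autorotate": "true"},
        "image": image_bytes,
    }
```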

If deaf children experience difficulty understanding a received SMS message, the fifth option can be used. An SMS message, or a part of its text, can be selected for transmission to the server that analyzes that piece of text. Similarly, (an extract of) an e-mail can be selected and sent for clarification to the server via the last input option.

3.2 Sentence Analysis

An important aspect of the analysis of the sentences that users submit is tracing the most important words, and thereby the context of the sentence. Different solutions are possible to detect the most important words of a sentence: dictionaries that give the word class, part-of-speech taggers that derive the word class in a more intelligent way by analyzing the sentence, frequency tables that provide information about how many times a word is used in the language, and thesauri that contain sets of related words (such as synonyms, hyponyms, and antonyms).

The current implementation of our conversation assistant uses a dictionary and a part-of-speech tagger. Frequency tables and a thesaurus are features for future versions of the application to improve the text analysis. For the English version of the conversation assistant, the English version of WordNet, a lexical database developed at Princeton University, can be used. However, because of the license fee of the Dutch version, WordNet was not incorporated in the current version of the conversation assistant that was evaluated by potential end-users.

OpenTaal is a project about spelling, hyphenation, thesauri, and grammar for the Dutch language (Opentaal, 2011). By harvesting sentences from government websites and online newspapers, the project has gathered a database with part-of-speech information, examples, and derivations for about 130,000 Dutch words. Although this database is not yet publicly available, the contributors of the OpenTaal project put the database at our disposal for the retrieval of word and word-class information in the conversation assistant. Based on this database, it is possible to configure which types of words have to be retrieved from the entered text (e.g., nouns and verbs can be used to search for explanatory pictures or videos). For performance reasons, the database is stored locally on the device of the end-user in the current implementation, but an online version is also possible. Regrettably, some words can have a different interpretation and word class depending on the sentence in which the word is mentioned. E.g., "minor" in the sentence "Kids under 18 are considered minors" is a noun, whereas in the sentence "I had a minor accident" minor is an adjective with a different meaning.
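The configurable dictionary lookup can be sketched as below. The word entries are a toy fragment we invented for illustration (the real OpenTaal database covers about 130,000 Dutch words); it also shows where the ambiguity problem arises: a flat dictionary cannot decide between the possible classes of "minor".

```python
# Toy fragment of a word-information database (word -> possible word classes).
WORD_CLASSES = {
    "kids": {"noun"},
    "considered": {"verb"},
    "minor": {"noun", "adjective"},  # ambiguous without sentence context
    "accident": {"noun"},
}

def candidate_keywords(words, wanted=("noun", "verb")):
    # Keep only words whose dictionary entry includes a configured word class;
    # these candidates are then used to search for explanatory pictures or videos.
    return [w for w in words if WORD_CLASSES.get(w, set()) & set(wanted)]
```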

This difficulty is solved by utilizing a part-of-speech tagger, a technique which assigns attributes to the words of a sentence and derives the word class based on information from the entire sentence. The part-of-speech tagger 'Frog', which is used for this purpose, provides more accurate results than an analysis based merely on the OpenTaal database. Frog is an integration of memory-based natural language processing (NLP) modules developed for the Dutch language. Frog's current version will tokenize, tag, lemmatize, and morphologically segment word tokens in Dutch text files, and will assign a dependency graph to each sentence (Frog, 2011). Because of several software dependencies, we opted to deploy the software on a server and send requests for analyzing sentences from the mobile device.
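Filtering the tagger's per-sentence output then replaces the flat dictionary lookup. A minimal sketch, assuming (token, tag) pairs in the CGN style that Frog emits, where noun tags start with "N(" and verb tags with "WW("; the example tags are illustrative, not actual Frog output.

```python
def content_words(tagged):
    # tagged: (token, tag) pairs as a part-of-speech tagger might emit,
    # using CGN-style tags (N(...) for nouns, WW(...) for verbs).
    # Only nouns and verbs are kept to build the image/video queries.
    return [tok for tok, tag in tagged if tag.startswith(("N(", "WW("))]
```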

3.3 Filtering and Watching Multimedia

After analyzing the text and filtering out unimportant words, the remaining words or groups of words are used to query various content sources via publicly-available APIs. In the current implementation, different sources are used to find images and videos: Google Image Search, Flickr, YouTube, and Vimeo. Moreover, new content sources, filters, and hint-generating systems (which are explained later) can easily be integrated thanks to the built-in plugin structure of the application. These content sources are evaluated and receive a rating to determine the order of the results for subsequent queries. This rating consists of two elements: an explicit score that the user has set in the settings menu, and an implicit score depending on the number of times an item of that content source is marked as "an explanatory image or video".
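One possible way to combine the explicit and implicit scores into a single source rating is sketched below; the combination formula (a simple sum, with the implicit score as the fraction of a source's items marked as explanatory) is our assumption, as the concrete formula is not specified here.

```python
def source_rating(explicit_score, helpful, shown):
    # Combine the user-set explicit score with an implicit score:
    # the fraction of a source's items marked "an explanatory image or video".
    implicit = helpful / shown if shown else 0.0
    return explicit_score + implicit

def order_sources(sources):
    # sources: {name: (explicit_score, helpful, shown)}; best-rated first,
    # determining the order of results for subsequent queries.
    return sorted(sources, key=lambda n: source_rating(*sources[n]), reverse=True)
```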

The multimedia files that are most likely to give an explanation for the entered text are displayed at the beginning of the list. Figure 1 shows a screenshot of the application which visualizes this list of content items that are relevant for the user. Only one image or video at a time has the focus and is prominently displayed in the user interface. Users can navigate to the next image or video by swiping their finger from the right to the left of the screen, or to the previous image by swiping from the left to the right.

At the top of the screen, an overview consisting of thumbnails of the resulting media files is visualized for the end-users. At the bottom of the user interface, hints in text form are displayed. These provide additional tips to understand the meaning of the sentence, e.g., the infinitive of a conjugated verb. New tips keep appearing until the last tip is shown. These tips can be hidden by tapping them.
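A hint of this kind can be generated from the lemma that a tagger returns alongside each token; a minimal sketch, in which the hint wording and function name are ours:

```python
def infinitive_hint(token, lemma):
    # If a conjugated verb form differs from its lemma (the infinitive),
    # produce a textual hint for the user; otherwise no hint is needed.
    if lemma and lemma != token:
        return f"'{token}' is a form of '{lemma}'"
    return None
```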

If users believe that a picture or video perfectly illustrates or clarifies the text, positive feedback can be provided by tapping the green thumb in the user interface. Tapping this green thumb makes it disappear from the displayed item and registers a positive evaluation of the content source that provided the item. This feedback ensures that items from this content source appear more prominently at the beginning of the result list for future search queries. Alternatively, media sources delivering confusing or unpopular media items can be removed to personalize the result list.

The main advantage of the developed communication aid is the ease with which the application can be extended and modified. Figure 2 shows the high-level architecture of the application and the services that are used to process the input, analyze the texts, and provide explanatory content. The current implementation of the conversation assistant relies on Google's service for the speech recognition feature. Because of the limited computational resources of mobile devices, converting pictures into text using OCR is also done on an external server. Next, the obtained text is analyzed by using a dictionary and a part-of-speech tagger. In the current implementation, word information is searched via the dictionary that is stored on the device. Alternatively, this dictionary can be made available as an online service to save a few megabytes of storage capacity on the mobile device. The utilized part-of-speech tagger is more resource-demanding than the dictionary and is therefore deployed on a server. Based on the results of this analysis, explanatory information is retrieved from online video and

Figure 1: A screenshot of the application showing the sentence that is unclear for the user at the top, the retrieved multimedia content in the middle, and additional tips for the user to understand the sentence at the bottom.

Figure 2: The high-level architecture of the application and the services that are used to process the input, analyze the texts, and provide explanatory content.

photo services. Each of these services can simply be replaced by an alternative service with similar functionality. Additional input methods, such as a service that can interpret sign language, text processing services, such as thesauri, or content sources, such as video services for deaf people, can easily be added to the application in the future.


4 EVALUATION

4.1 Setup

An informal subjective evaluation of the usefulness of the application was performed by means of individual interviews with experts and three focus groups (two focus groups with deaf children and one with experts). The first focus group was organized with four deaf pupils between 12 and 16 years old in a school for deaf children who need special secondary education. Four deaf children between 8 and 12 years old participated in the second focus group. The third focus group was organized for experts in the field of deaf children, such as teachers and employees of schools for deaf children. Three people who work with deaf children on a daily basis participated in this focus group: the principal of the school, an occupational therapist, and a speech therapist.

Before each focus group started, the application was introduced by a slide show with screenshots of the application and verbal instructions, which were translated for the deaf children by a sign language interpreter. The features of the conversation assistant were briefly discussed but not demonstrated. Next, the children were asked where, when, and with whom they would use a conversation assistant and which features are useful according to them. Finally, a mobile device was given to each child to try the application. During this test, we guided the children by projecting the user interface of the application using a PC and an Android emulator, thereby showing the successive steps required to achieve the desired output. The children were given some small assignments to illustrate the situations in which the application can be useful. Examples of these assignments are "searching for the meaning of a word in an SMS message or e-mail on the device", "searching for an explanation of a word that they do not understand on a webpage", and "searching for the meaning of a word of a recipe in a cookbook by taking a photo of the book".

To investigate whether children are able to derive the meaning of a phrase or a word by using the conversation assistant, each assignment had the following structure. Before using the conversation assistant, the children were first asked to specify the meaning of a difficult word (or what they thought the word means) in a short questionnaire. Then, they could use the application to search for visual clues explaining the word; and finally they were asked again what they thought the word means. After each assignment, the children were asked if they could perform the task without problems and if the application was a good assistance during the search for the meaning

of the word. The last part of the experiment was the actual focus group, in which the children could discuss whether the application met their expectations, whether they would use the application (and in which situations), whether they liked the application and the input options, the usability, and possible improvements of the application.

4.2 Results

The children quickly mastered the touch screen, the operation of the mobile phone, and the application. They learned how to use the application even faster than hearing adults who used the conversation assistant. They liked the application and the visual elements supporting the navigation (the icons, and the thumbs for providing feedback). The questionnaire showed that the pictures and videos helped the children to understand the meaning of difficult words in the assignment. Moreover, the application made it fun and attractive to learn new words based on visual content.

According to the test users, the biggest advantage of the conversation assistant is the easy input method. Entering texts by selecting an SMS or e-mail is considered very useful. The textual information about some words (e.g., the infinitive of a verb conjugated in the past) is also considered useful by some children; for other children this feature might provide unnecessary information.

Since many young deaf people do not know which part of the sentence they do not understand, they cannot put their finger on the important words. Because of this, they believe that the developed conversation assistant is a more useful aid during communication than Google's search engine. If end-users enter a whole sentence into Google's search engine, they get a lot of pictures which do not explain the context. In contrast, the communication aid filters out the important words and combines them. That is why it is experienced as more efficient than a regular search engine. Furthermore, the children (and experts) identified various scenarios in which such a conversation assistant would be useful, such as shopping and studying.

In contrast to the enthusiasm of the children when trying the application, various disadvantages were identified by the experts and even by the children themselves. They argued that a photo or a video is sometimes not sufficient to understand a word or a sentence. Irrelevant search results are still possible and may also introduce confusion during the interpretation, especially for abstract words. Moreover, it is difficult to explain the structure of the sentence or the syntax by means of visual content. Another disadvantage that was mentioned is the loading time of a


search request. A slow data connection or the file size of visual content can cause waiting times of up to about 30 seconds.

Finally, the experts provided various suggestions to extend the current application. To eliminate irrelevant pictures from the traditional content sources, experts pleaded for a database that links each word to two or three clarifying pictures. Such a database could be filled by teachers before the beginning of a themed lesson, for instance. According to the experts, example sentences in which the difficult word is used in different contexts might also be useful. Furthermore, the test users would like to get a movie explaining the word using sign language, next to the current visual content; or even a feature that translates multiple sentences into sign language. Besides the need for a database with explanatory videos in sign language, this introduces the difficulty of the different dialects of sign language that are used.

5 CONCLUSIONS

In this paper, we discussed a conversation assistant for hearing-impaired or deaf people running on a smartphone. Users can input texts by using the keyboard of the device, taking a picture and using OCR, recording speech and using speech recognition, or selecting phrases from a webpage, e-mail, or SMS message. These texts are analyzed, and the most important words are converted into pictures and videos originating from public content sources like Google Images, Flickr, and YouTube.

Focus groups with deaf children and experts in the field revealed various scenarios in which the application can be useful for deaf or hearing-impaired people. Since not many similar applications exist, we received a lot of positive reactions regarding the development of the conversation assistant. Deaf children who experience serious difficulties with understanding texts and conversations can use the application to fill gaps in their vocabulary. Children with better communication skills can use the application to learn more difficult words or to understand conjugated verbs.

Moreover, experts recognized the usefulness of the conversation assistant for other target groups, such as people suffering from dyslexia or autism. The results of this research can help to inspire future projects aiming to reduce the conversation barrier between people with a disability and society.

ACKNOWLEDGEMENTS

This work was supported by the IBBT/GR@SP project, co-funded by the IBBT (Interdisciplinary institute for BroadBand Technology), a research institute founded by the Flemish Government. W. Joseph is a Post-Doctoral Fellow of the FWO-V (Research Foundation - Flanders). The authors would like to thank the schools BuSO Sint-Gregorius and Kasterlinden for providing the opportunity to evaluate the application by focus groups.

REFERENCES

Brière, M. (1995). Consultation sur les technologies de l'information: Résultats de sondage auprès des organismes et des personnes sourdes et malentendantes. Centre Québécois de la déficience auditive. Médias Adaptés Communications, Montréal.

Dubuisson, C. and Daigle, D. (1998). Lecture, écriture et surdité: visions actuelles et nouvelles perspectives. Éditions Logiques, Outremont, QC.

Fortnum, H., Marshall, D., Bamford, J., and Summerfield, A. (2002). Hearing-impaired children in the UK: education setting and communication approach. Deafness and Education International, 4(3):123–141.

Frog (2011). Dutch morpho-syntactic analyzer and de-pendency parser. Available at http://ilk.uvt.nl/frog/.

Hotton, M. (2004). Le comité sur les nouvelles technologies en déficience auditive de l'I.R.D.P.Q.: projet de recherche sur les appareils de télécommunications pour les personnes sourdes et malentendantes. Différences, Magazine de l'Institut de réadaptation en déficience physique de Québec, 5:32–38.

Johnston, T. A. (2006). W(h)ither the deaf community? Population, genetics, and the future of Australian Sign Language. Sign Language Studies, 6(2):137–173.

Opentaal (2011). Welkom bij OpenTaal. Available at http://www.opentaal.org/.

Oralys (2011). Mobile Communicator. Available at http://www.oralys.ca/en/solutions/talk/mobile_communicator.htm.

Sorrell, C. A., Bell, S. M., and McCallum, R. S. (2007). Reading rate and comprehension as a function of computerized versus traditional presentation mode: A preliminary study. Journal of Special Education Technology, 22(1):1–12.

Vincent, C., Deaudelin, I., and Hotton, M. (2007). Pilot on evaluating social participation following the use of an assistive technology designed to facilitate face-to-face communication between deaf and hearing persons. Technology and Disability, 19(4):153–167.

WiseTREND (2011). Online OCR API Guide. Available at http://www.wisetrend.com/WiseTREND_Online_OCR_API_v2.0.htm.