Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2013, 57(1), 2042-2046. DOI: 10.1177/1541931213571456



Speaking the user’s language: Evaluating translation memory software for a linguistically diverse workplace

Susan G. Campbell, Sarah C. Wayland, Alina Goldman, Sergey Blok

University of Maryland Center for Advanced Study of Language

Allison L. Powell

Corporation for National Research Initiatives

Translation technology, including translation memory software, can make the work of professional translators more efficient and effective. Using a multi-disciplinary approach with converging methods, we evaluated translation memory software in the context of the work environment in which it was to be deployed. Our evaluation had three phases: (1) a contextual inquiry of translation practice, (2) a usability test of the candidate software package, and (3) a heuristic evaluation of the ways that the software package matched or did not match the needs identified in the contextual inquiry. From this, we derived a set of recommendations for incorporating translation technology into the organization’s processes. Suggestions included incorporating shared translation memory into the workflow and working with the software vendor to improve text handling for non-European languages. Problems with both the software under evaluation and standard workplace software highlighted the need for developers to evaluate language technology with languages that use non-Roman orthographies.

Translation has changed enormously in the past few years with the addition of new technology and the wider adoption of tools that had previously been used only in specialized contexts. Most media attention focuses on the promise of machine translation, a fully automated way of producing a translation from a source document, but that is not the only translation technology in use among professional translators. Rapidly maturing tools for augmenting the work of humans are being widely adopted by translators working in environments ranging from teams embedded in large organizations to individual freelance translators using a personally-owned computer.

Most translation technology is not generally affordable for individuals or small business owners, and is usually purchased as an enterprise application for translators in a large organization. Because of the specialized nature of translation technology and because widespread adoption is still in progress, the usability of translation technology has not been thoroughly evaluated.

The focus of this paper is on Translation Memory software—a database that allows the translator to reuse previously translated material. The software works by retrieving stored segments (phrases, sentences, or larger text units) that have been previously translated. The corresponding translations are filled in and can be further edited or accepted as-is by the translator. A probabilistic matching algorithm is used to retrieve source segments, and the retrieval algorithm can be fine-tuned depending on text source type or translator preference. The software we used in this investigation automatically segmented our documents into sentence-level segments and matched them against a database of previously-translated sentences.
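To make the retrieval step concrete, the following minimal Python sketch illustrates fuzzy lookup against a small translation memory. It illustrates the general technique only, not the evaluated product's algorithm: the toy entries, the use of difflib's similarity ratio, and the 0.75 match threshold are all assumptions chosen for clarity.

import difflib

# Toy translation memory: previously translated source segments mapped to
# their stored translations. Entries and threshold are illustrative only.
TM = {
    "The meeting is scheduled for Monday.": "<stored translation 1>",
    "Please review the attached report.": "<stored translation 2>",
}

def segment(document):
    """Naive sentence-level segmentation, standing in for the automatic
    segmentation described above."""
    return [s.strip() + "." for s in document.split(".") if s.strip()]

def best_match(seg, threshold=0.75):
    """Return (stored_source, stored_translation, score) for the closest
    translation memory entry, or None if nothing clears the threshold."""
    best = None
    for source, translation in TM.items():
        score = difflib.SequenceMatcher(None, seg, source).ratio()
        if best is None or score > best[2]:
            best = (source, translation, score)
    return best if best and best[2] >= threshold else None

for seg in segment("The meeting is scheduled for Tuesday. Budget figures follow."):
    hit = best_match(seg)
    if hit:
        print(f"{seg!r} -> fuzzy match ({hit[2]:.2f}): {hit[1]}")
    else:
        print(f"{seg!r} -> no match; translate from scratch")

A near-miss such as the first sentence above clears the threshold and retrieves the stored translation for editing, while an unseen sentence falls back to translation from scratch.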

Our team worked with a medium-sized, geographically dispersed organization preparing to deploy translation technology software. Our task was to evaluate that software in the context of their work environment. It was important to recognize that translators working for the organization came from a wide range of linguistic backgrounds and generated a wide range of translated products in a large number of languages, creating a linguistically diverse workplace.

The client organization receives documents in a variety of source languages and delivers translations into a target language or languages specified by a particular document’s customer. Many organization-internal procedures exist in order to ensure that the products meet the organization’s internal quality standards and the customer’s specifications. The specific use case we investigated was one of those organization-internal intermediate steps, called quality control (QC) by our client, in which experienced reviewers edit draft translations prior to delivery.

We first investigated the organization’s needs with a contextual inquiry (Holtzblatt & Jones, 1993), then followed up with a usability test of a proposed software tool, and finally performed a modified heuristic evaluation of the software used in the usability study (Nielsen, 1994). Each of these techniques helped us identify problems to be addressed before translation technology was rolled out to the entire organization. In this paper, we describe our implementation of each of these methods, our findings, and the implications for other practitioners engaged in evaluating language technologies.

STUDY 1: CONTEXTUAL INQUIRY

In order to understand the characteristics of the organization we were working for, we conducted a contextual inquiry (Holtzblatt & Jones, 1993) in which translators performed tasks that were similar to their usual work tasks while we observed and asked questions. During the investigation, the translators performed quality control tasks, where they edited pre-existing translations to bring them up to the organization’s standards.

Methods

We interviewed six translators from the organization, two who primarily translated from English to Dari and four who primarily translated from Arabic to English. All of the participants were native speakers of a language other than English but were highly proficient in English.

We asked participating translators to perform several short quality control tasks based on specific open-source materials that we provided. The materials were chosen by the research team to be as representative as possible of operational texts while still being short enough for quality control to be completed in a short time window. Our materials included scanned documents (e.g., faxes), electronic documents (e.g., documents generated by a word processor), and recorded audio (e.g., news broadcasts). In order to mirror the current work conditions in the organization, the contextual inquiry tasks did not involve any specialized translation software; we simply asked participants to perform quality control tasks using the organization’s most common text editor. The documents provided a framework for discussion, but we were not interested in the actual revised translations.

Our contextual inquiry protocol included questions about participants’ overall workflow as well as questions about their use of, and opinions about, specialized translation software.

It is important to note that we focused on quality control tasks, not the full translation workflow. In quality control tasks, a reviewer is presented with the original source material and a complete translation produced by another translator. The quality control reviewer verifies that the translation is fluent, accurate, and complete. If needed, the reviewer may make corrections to the original translation.

One interviewer proctored the tasks and administered the contextual inquiry, while two other team members took notes. After the participants had completed the tasks, we asked them to retrospectively think aloud and explain how they understood the tasks they had performed. We used retrospective think-aloud rather than concurrent think-aloud in order to minimize the interference produced by vocalizing in English while performing a task in another language (O’Brien, 2005).

We also collected questionnaire data on the participants’ language history and usual technological setup.

Results

We found that participants had very diverse linguistic backgrounds and used a variety of tools. Translators who worked on-site for the organization had a different set of tools from translators who worked as contractors at home or at a different work site. Contractors were more likely to be familiar with translation memory and translation technology tools than were direct employees.

We found that participants were very adaptable and could deal with many different kinds of software, but that there were improvements that could help make the existing workflow more functional. Participants provided a wide array of comments about their work process and their use of technology; they were particularly interested in more online dictionaries, and better non-English resources. They also expressed a preference for more computer monitors (at least two).

The biggest software-related challenge we observed was the handling of right-to-left text (e.g., Arabic) and bi-directional text (e.g., Arabic with numbers/dates, which are written left-to-right in Arabic) in word processing and presentation software. Even experienced translators described the appropriate handling of right-to-left text as a challenge.
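To illustrate why bi-directional text is so easy to mishandle, the short Python sketch below (our own illustrative example, not drawn from the study materials) uses the standard unicodedata module to show that a single Arabic phrase containing digits mixes characters from different Unicode bidirectional classes, which every editor and renderer must reorder correctly at display time.

import unicodedata

# Arabic letters belong to the right-to-left class AL, ASCII digits to the
# left-to-right class EN ("European number"), and Arabic-Indic digits to AN.
# The sample string is a made-up illustration, not a study document.
sample = "عام 2013 / عام ٢٠١٣"

for ch in sample:
    if not ch.isspace():
        print(f"U+{ord(ch):04X} {unicodedata.name(ch, 'UNKNOWN'):<35} "
              f"bidi class: {unicodedata.bidirectional(ch)}")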

The participants also provided valuable insight into their work process, especially the challenges of working collaboratively on a large assignment. When working in parallel with other translators on parts of a single text, translators need to use consistent vocabulary and transliteration both within their own translations and across the team’s translations. Spelling the names of places, people, and technical terms the same way both within and across documents helps to ensure accuracy and consistency. Several translators in the contextual inquiry recommended the use of translation memory technology and sharable, editable glossaries to improve consistency.

We developed a set of recommendations based on the results of the contextual inquiry that included additional training, the use of standard non-English keyboard layouts, the standard use of two monitors so translators could more easily view resources at the same time as they were working with their translated documents, and improvements to workflow software. Our findings also indicated that translation technology could help the translators if it provided (1) automated formatting that matched target language formatting to source text formatting, (2) transliteration guides, and (3) terminology lists, including translation memory and specialized vocabulary.

STUDY 2: USABILITY TEST

The contextual inquiry study had given our team insight into the organization’s work process and confirmed that terminology standardization was challenging. One of the goals of the project was to evaluate a potential class of tools to be added to the workflow to improve the efficiency and effectiveness of translation. In the next part of our investigation, we conducted a usability test on translation memory software. Our evaluation was intended to identify user requirements, potential challenges to technology adoption, and points where the organization might wish to work with the software vendor to make the software more usable in the current work environment. Though testing experienced users would have been more useful for predicting long-term behavior, it was not practical because the software had not yet been adopted. In addition, testing with experienced users would not have identified as many barriers to adoption.

The main questions the usability study was designed to answer related to the standard three facets of usability from the ISO standard: efficiency, effectiveness (or output quality), and satisfaction (Abran, Khelifi, Suryn, & Seffah, 2003). Assessing efficiency involved finding the pain points in the users’ interactions with the software: where did they make errors, and where did they get stuck? Effectiveness was related to the quality of each person’s work output; did people who used the software produce good translations? Finally, satisfaction related to how people felt about their interaction with the software.

To that end, we asked participants to use a proprietary translation editor that contained features designed to address some of the same problems that the contextual inquiry had identified with the current software setup. This software included a robust translation memory module designed to facilitate translation work, especially for phrases.

Methods

In this evaluation, as in the contextual inquiry, we investigated one set of translators who worked primarily translating between Arabic and English and another set of translators who worked primarily translating between Dari and English.

Sixteen translators were recruited by the client organization using their normal recruitment process, but one was redirected to the Contextual Inquiry study because of a software failure. No participant was involved in both studies. Of the remaining translators, seven worked primarily with Arabic, and eight worked primarily with Dari. Their median age was 42, and 5 of the 15 were female. The participants had a wide range of translation experience, though most were quite experienced (Mean = 13 years, Range = 9-36 years). None had prior experience with the software.

The task was performed in a lab setting, where the software had been installed on a computer that was not connected to the Internet.

At the beginning of the session, participants completed several questionnaires, including a language history questionnaire, a language technology questionnaire, and the Translation Technology Attitudes Survey (Dillon & Fraser, 2006).

Three documents were selected for this evaluation from the set used for the contextual inquiry. Two were written documents: one had no formatting or special language, and one had extensive formatting and technical language. The third document was a translation of a transcript created based on an audio file.

Participants were asked to perform a quality review as they normally would, but they were also asked to incorporate an increasing number of software features into their work process. The moderator started by explaining the basics of the editor. We then allowed the participants to explore the software for a few minutes using a practice document. Participants were then asked to perform a quality control edit on the simplest of the three documents. Next, they were shown the formatting features of the editor, and asked to perform a quality control edit of the second, more complicated, document. Finally, participants were asked to perform a quality control edit on the audio document and to add a word to the terminology management system that was included with the editor. In between each of these tasks, participants completed an After-Scenario Questionnaire (ASQ; Lewis, 1991). At the end of the session, participants completed the System Usability Scale (SUS; Brooke, 1996).

A final quality review of the documents the translators produced in this phase is described in the Study 2A section.

Results

Participants were generally positive about the features of the proprietary translation editor that was evaluated. Nearly all translators indicated interest in using both the translation memory and vocabulary management tools. In addition, users were positive about the side-by-side presentation of source text and translation in the editing environment as well as the built-in formatting correctness-checking feature.

Although participants were enthusiastic about the potential of the features, our team was able to identify several usability issues that would hinder users’ ability to perform essential tasks, including issues with layout and controls, ease of use, and system performance.

We devised a coding scheme that allowed us to characterize the types of errors and pain points that participants reported. This coding allowed us to identify the obstacles to efficiency that existed within the software, such as standard shortcut keys (Ctrl+V) that did not work, bugs in the way text displayed (such as system objects being edited by accident), and visual design features that made particular actions hard to accomplish. Users also frequently expressed a preference for being able to move the source and target windows (which were fixed to the left and right of the screen, respectively) and to customize their workspace for their particular language or capabilities. The participants were generally positive about the potential of the software, but generally negative about the specific features as they were implemented.

User satisfaction was assessed using several different scales, and we report here the results for the SUS, which was completed at the end of the session, and the ASQ, of which three were completed by each participant, one after each task. The scores are given in Table 1. The SUS scores are on the low end of marginal acceptability on the scale provided by Bangor, Kortum, and Miller (2008).

Table 1. Mean scores on satisfaction measures for the tested software. Numbers in parentheses are the range of possible scores, and higher scores are better.

Scale          N    Mean   SD
ASQ 1 (2-10)   15   7.3    2.2
ASQ 2 (2-10)   14   6.6    2.3
ASQ 3 (2-10)   12   8.0    2.6
SUS (0-100)    15   56.5   18.7
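For readers less familiar with the SUS, the 0-100 values in Table 1 follow the standard scoring rule from Brooke (1996): each of the ten items is answered on a 1-5 scale, odd items contribute (response - 1), even items contribute (5 - response), and the sum is multiplied by 2.5. The Python sketch below shows that standard computation with a hypothetical response set; it is not the authors' analysis code.

def sus_score(responses):
    """Standard SUS scoring (Brooke, 1996): ten 1-5 Likert responses are
    rescaled to 0-4 (odd items: r - 1; even items: 5 - r), summed, and
    multiplied by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Hypothetical response profile for illustration only; a middling set of
# answers like this lands near the marginal range reported above.
print(sus_score([4, 2, 3, 3, 4, 3, 3, 2, 3, 3]))  # 60.0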

To address the question of effectiveness, we conducted a separate mini-study in order to assess whether the quality of the translations produced in this usability study was related to the use of the software, so those results are reported in the following section.

STUDY 2A: QUALITY OUTCOMES

After we had collected the participants’ corrected translations in the usability study, we asked a smaller set of reviewers, two per language, to assess whether the revised translations were in fact better than the original translations. There were multiple questions we hoped to address with this final review. We wanted to confirm that participants were able to complete the QC tasks using this software in a way that was consistent with the usual QC process. In addition, we were aware that participants were in the process of learning the new software while participating in our experiment, and we wanted to measure the amount of work they were able to complete in the time allowed for the experiment and to gauge whether the amount of time the participants spent using the software was related to their outcomes on translation quality. We did not compare the revised translations to quality-reviewed translations produced using the organization’s typical work process, though that would be useful for future experiments.

Methods

Four additional reviewers were recruited to perform a quality review of the output of the usability study. Two of them were Arabic translators and two of them were Dari translators. The final reviewers were asked to assess the fluency and accuracy of the original translations and the reviewed translations produced by the study participants. The final reviewers also performed an independent quality review of the original translations, making any corrections they deemed necessary. They performed the evaluations at their normal work location, using their usual tools.

Results

Final reviewers differed in the degree of editing that they proposed for each translated segment (sentences, in this particular software), and their reviews of identical segments did not correlate with each other. For each language, we found that one of the final reviewers was more strict and the other more willing to accept the translations as they were. We concluded that there was little agreement between reviewers on what constituted a good translation, which reinforces Denkowski and Lavie’s (2010) findings on translation quality ratings. However, the final reviews allowed us to confirm that the study participants used the software in an expected way. Source documents that the final reviewers had identified as needing greater improvement tended to be edited substantially during the study, both in terms of the number of translation segments that were edited and the number of edits that were made.

STUDY 3: HEURISTIC EVALUATION

After we finished collecting the usability data, we discovered that, although our coding scheme had allowed us to characterize participant comments as obstacles to efficiency, bugs, or design flaws, we did not have an effective organizing framework for noting the underlying issues. We also noted that some participants commented on particular aspects of the interface, while others did not notice those aspects. We chose to use an adaptation of Nielsen’s (1994) heuristic evaluation methodology to produce a more coherent account of the types of problems the participants had encountered with the interface.

Methods

The research team investigated the software for approximately two hours without any participants present. The team walked through the tasks the participants had completed and isolated the issues reported by participants. The team then characterized the identified interface problems and determined which of Nielsen’s heuristics they pertained to.

Results

The team found that the use of Nielsen’s heuristics provided an effective way to organize the interface problems that were identified. The team organized the interface problems based on the specific aspect of the interface to which they applied, for example, problems with documentation, problems with editing tools, and problems with terminology management tools. Within that organization, categorizing problems with the heuristic(s) that applied provided a useful overview of the underlying interface problems.

Including both participant comments and the outcome of research team testing, we found more than thirty violations of Nielsen’s heuristics, such as a lack of progress bars (violating the admonition to always display system status) and a nonstandard tag-based markup language (violating the principle of following standards where they exist). Some example heuristic violations are given in Table 2. We found again, as in the contextual inquiry, that language directionality was a common source of invisible (and thus difficult to ameliorate) errors. We were also able to identify some bugs that we had been unable to characterize as system or user error in the usability testing sessions.

Table 2. Example violations of Nielsen's (1994) heuristics.

Visibility of system status: Software did not display progress bars for any operation.

Error prevention: Formatting tags should not be editable, but it is possible to accidentally click inside one and edit it, causing the program to crash.

Recognition rather than recall: Important buttons were labeled only with difficult-to-understand icons. The button for a particular function was not designed the same way between different views, requiring users to recall that different buttons corresponded to the same action in different views.

Help users recognize, diagnose, and recover from errors: Language directionality problems (right-to-left versus left-to-right) can cause hidden errors; the most common issues are that a language’s default directionality is set to the wrong value in the system or that the directionality of a source document was set incorrectly before it was imported.

DISCUSSION

The three methods we used to investigate the workplace needs and the software proposed to meet them allowed us to make recommendations to the client about the specific outcomes likely in deploying this translation memory software and provided us with insights into how to do this kind of evaluation more effectively in the future.

Situation-specific Outcomes

We found that it was crucial to understand the organization’s context and workflow prior to attempting to evaluate a specific piece of software. This step may seem obvious, but the contextual inquiry provided the necessary context for understanding which of the issues uncovered in this evaluation were specific to the software tool being evaluated, and which issues were more intrinsic to the overall workflow.

We provided recommendations to the client based on our study that included notes to pass on to the software vendor for improvements to the usability of the software. Based on the contextual inquiry, we could also provide process-based recommendations for improving the general workflow for quality control at the organization.

We found that the main challenge of language software adoption was language support, especially where non-European languages are concerned. Most language software appears to be designed for translation between European languages with Roman or Cyrillic orthography, and even when non-European languages are officially supported, the language tools may be more difficult to use for languages with different orthographies from the languages the software was designed to support.

Usability Practitioner Recommendations

The convergence of these three evaluation methods, a contextual inquiry, a usability study, and a heuristic evaluation, allowed us to examine the software and the translators’ performance from several different points of view, to collect a broad range of data, and to characterize common problems.

The combination of contextual inquiry and usability study allowed us to note which problems were preexisting; no software handled right-to-left text appropriately, not even the software currently used for text editing. Performing the contextual inquiry first also informed the design of the usability study by showing which areas were problematic in the existing workflow.

The combination of heuristic evaluation and usability study allowed us to classify problems that users identified as universal problems or idiosyncrasies. Performing the usability study before the heuristic evaluation, though it is not common practice, allowed us to focus the heuristic evaluation on problems that the users had previously identified.

Lessons Learned

Because our evaluation was commissioned as part of a process designed to determine how best to deploy new software in an operational context, we were only able to evaluate the software with novice users. It would be useful to repeat the study after the translators have been using the software for a while so we can more accurately assess how it fits into the workflow and which features are most useful.

It would also be useful if the protocol had allowed more time for training on the new software. While it is important to observe people when they are first learning, it is also important to observe them once they have learned to work around problems. Our investigation at the beginning of the evaluation process allowed us to determine which issues needed to be addressed prior to deployment, but it was difficult, if not impossible, to gauge how effective the software would be once users were comfortable with the new technology. Indeed, it would be quite instructive to see which features were most useful to experienced users. This need to test experienced users, of course, reflects the Catch-22 of usability testing for software adoption; one must first adopt the software for at least some portion of the organization in order to test it with experienced users.

Last, but not least, the time constraints of our evaluation did not allow the translators who participated in our study to spend as much time on the tasks as they wanted. Future evaluations should allow enough time for translators to complete their work on each task.

ACKNOWLEDGEMENTS

Our colleague Evelyn Browne provided invaluable assistance with data collection and analysis. This work was funded by the National Virtual Translation Center (NVTC). None of it would have been possible without the guidance, support, and time that Oksana Lassowsky, Technical Director, NVTC, and members of her technical team gave us. NVTC’s translators were unfailingly helpful and professional, even when what we asked them to do seemed exceedingly odd. We are also grateful to our two anonymous reviewers for insights that greatly improved this paper.

Disclaimer

This material is based on work supported with funding from the United States Government. Any opinions, findings, conclusions, or recommendations expressed herein are those of the authors and do not necessarily reflect the views of the University of Maryland, College Park, and/or any agency of the United States Government.

REFERENCES

Abran, A., Khelifi, A., Suryn, W., & Seffah, A. (2003). Usability meanings and interpretations in ISO standards. Software Quality Journal, 11(4), 325-338. doi:10.1023/A:1025869312943

Bangor, A., Kortum, P., & Miller, J. A. (2008). The System Usability Scale (SUS): An empirical evaluation. International Journal of Human-Computer Interaction, 24(6), 574-594. doi:10.1080/10447310802205776

Brooke, J. (1996). SUS: A quick and dirty usability scale. In Jordan, P. W., Thomas, B., Weerdmeester, B. A., & McClelland, I. L. (Eds.), Usability evaluation in industry (pp. 189-194).

Denkowski, M., & Lavie, A. (2010). Choosing the right evaluation for machine translation: An examination of annotator and automatic metric performance on human judgment tasks. In Proceedings of the Association for Machine Translation in the Americas (AMTA) 2010, Denver, Colorado.

Dillon, S., & Fraser, J. (2006). Translators and TM: An Investigation of Translators' Perceptions of Translation Memory Adoption. Machine Translation, 20(2), 67-79.

Holtzblatt, K., & Jones, S. (1993). Contextual inquiry: A participatory technique for system design. In Participatory design: Principles and practice, 180-193.

Lewis, J. R. (1991). Psychometric evaluation of an after-scenario questionnaire for computer usability studies: The ASQ. SIGCHI Bulletin 23, 73-78.

Nielsen, J. (1994). Heuristic evaluation. In Usability inspection methods. John Wiley & Sons, New York, NY, 25-62.

O'Brien, S. (2005). Methodologies for Measuring the Correlations between Post-Editing Effort and Machine Translatability. Machine Translation, 19(1), 37-58.
