Plan Evaluation


  • 8/9/2019 Plan Evaluation


    To complete the virtual chat reference evaluation at St. Peter College's T. Dorney-Borrelli Library, we will divide the evaluation into three separate phases: pre-implementation, during the beta period, and after the service has been in use for 6-8 weeks. Our data will therefore come from three evaluations, including reference transactions, drawn from three different types of sources.

    I. Data needed for evaluation and sources for data

    The first kind of information we need is whether the patrons of our academic library would like to see virtual reference services offered in addition to the traditional reference services we currently provide. We need to gauge interest before installing chat reference, because if patrons are unwilling to use a virtual reference service, implementing it would waste time and money. The source for this data will be a questionnaire, which will yield quantitative results measured by statistical analysis. The questionnaire will be given to library patrons in paper format and online, as discussed below. After evaluating the results, we will determine whether to move on to the next step: implementing the beta period of the virtual reference service.

    The second kind of data we need is whether the virtual reference service implemented during the beta period worked sufficiently well, as determined by a second questionnaire administered to patrons at the conclusion of every reference transaction. The audience for this questionnaire will therefore be narrower than for the first, since only actual users of the virtual reference service will be involved. After the questionnaires are completed, we will evaluate user comments and suggestions and determine what changes need to be made.


    The third kind of information we need is what the reference librarians staffing the service think of it. The reference librarians' opinions are important to our evaluation because the librarians will provide a behind-the-scenes look at patron activities. This data will be obtained through a questionnaire given to the reference librarians 6-8 weeks after implementation of the service.

    II. How to elicit data from sources

    As a result of the many articles we considered in our literature review, we discovered that gauging user input is extremely important to a successful virtual reference venture. We therefore decided that three questionnaires would be necessary to assess user needs successfully. As stated in our literature review (2007), "Libraries must balance users' needs with the desire to obtain technology that creates easier access to the collection and virtual reference" (p. 4). To strike that balance, library professionals must assess their users' needs: questionnaires were used almost exclusively in the library science journal articles discussed in our literature review. According to John Doherty (2006), who considered how reference librarians should interact in the online environment, understanding user needs is crucial (pp. 101-102).

    Through intensive literature research we found that comparing user satisfaction is one way to determine the differences between virtual and face-to-face reference. A study by Nilsen (2006) measured user satisfaction (willingness to return) as a major criterion of a service's success and showed that virtual reference transactions had lower rates of satisfaction than face-to-face reference (pp. 92-93). Overall statistics (face-to-face and virtual reference transactions combined) showed that one third of patrons were dissatisfied with the service. The study also showed that patrons were less satisfied with email reference than with chat reference.


    However, the study's limitations may have made the results inconclusive as a sample: data from 266 face-to-face reference transactions were compared with data from only 85 virtual reference transactions (Nilsen, 2006, pp. 92-95). Nilsen (2006) determined that user satisfaction has more to do with how users felt about the interaction than with whether they received the information they were seeking (p. 94). As a result, we decided that an important facet of the questionnaires would be how patron and librarian felt about the service they received, in terms of both technology and answers.

    We hope to use our questionnaires to determine how participating patrons and librarians feel about the service: its technology, the quality of the transaction, and its ease of use.

    A. Types of Questionnaires (please see Appendices A, B, and C):

    We thought extensively about the format of the questionnaires. The questions were written with brevity in mind, and we use the word "other" frequently because it gives users more opportunity to include their own comments. Conciseness was imperative: many users would be daunted by a twenty-page questionnaire and fail to complete it, defeating the purpose of the assessment.

    The questionnaires' format is also important. Both paper and online versions will be available, allowing users of varying technological skill and physical capability to participate. The paper form in particular will be placed throughout the library and will serve as another means of drawing attention to the service; awareness is key to the successful implementation of any new technology. The online format has many benefits, such as cost, which, depending on the number of people taking the questionnaire, would be minimal (Survey Monkey is free for 100 respondents and under). The time it takes to evaluate the questionnaires is also shorter with the computerized format, and users tend to give longer answers to open-ended questions in the online environment. Our use of simple, uncomplicated vocabulary was intentional: the simpler the question, the clearer the results. Complicated and flowery language tends to confuse the user and skew the results (Creative Research Systems, 2006).

    The first questionnaire (pre-implementation) will focus on what users already know about chat software. As stated previously, it will be offered in both paper and online versions so as to hear from as many types of users as possible. The questions are geared toward gauging potential users' level of computer literacy, which will influence our choice of software. Although multiple-choice questionnaires do yield a lot of information, for our purposes specific (qualitative) answers are needed to guide us toward the best chat software for an academic library setting. The literature on virtual reference helped us understand the important aspects to keep in mind before implementing a new technology: the strengths and weaknesses of each option, such as co-browsing, downloaded versus non-downloaded software, and free versus paid software, are just a few of the concerns behind the first questionnaire. Polling users gives librarians an understanding of what patrons want and need in new technologies.

    The second questionnaire (to be given during the beta period) focuses on users' use of the service. Continued improvement of any technology is extremely important to increasing its use. The questionnaire has only seven questions, so it will be administered at the conclusion of each reference transaction during the beta period and after final implementation. Satisfaction is a recurring theme in the questionnaire, because if patrons are not satisfied they will not continue to use the service.


    Continual improvement of services encourages frequent use and gives insight into which services users are actually using, so a question was included on this issue. Justifying these services is a battle librarians must constantly fight, but solid quantitative data makes it easier to do so.

    The third questionnaire is an assessment geared toward the reference librarians providing the virtual reference service. Understanding reference librarians' opinions is important because they are on the front lines of virtual reference transactions and therefore have important insight into patron needs. Numerous articles in our literature review were devoted to the opinions of reference librarians after implementation, because they are the easiest members of the library community to assess. Assessment by librarians is vital because, as information professionals, they have a keen eye for what improvements should be made. Adaptability is important in the virtual world: the shelf life of many technologies is measured in months, so effort to extend the usefulness of services is of utmost importance.

    The third questionnaire is similar to questionnaires one and two in that it includes both qualitative and quantitative questions to encourage comments and statistical data. Many articles in our literature review mentioned time being a struggle for virtual reference librarians, and their feelings of being rushed, so a question was added to gauge the librarians' comfort level during virtual reference transactions. Another important question asks the librarians whether the service should be continued and whether they think it benefits the users and the library. Much discussion in the literature about training led to question six, which asks each reference librarian whether they felt prepared when offering virtual reference services. Preparation is also a key to success that must be evaluated to create a sustainable virtual reference project. Finally, so that no suggestion is missed, the last question invites any comments the questionnaire failed to ask about.

    III. By what means the data will be evaluated

    The data from these three surveys will be both qualitative and quantitative, and the way we evaluate them will differ accordingly. Our pre-implementation survey, which will measure whether our audience desires a virtual reference service, will yield quantitative results, which we will measure through basic statistical analysis.

    Our second evaluation, which will take place during our service's beta, or trial, period, will be more subjective in nature. The evaluations will tell us which features of the service users like and dislike, how users would change the service if they could, and how satisfied they are; like Ward (2004), we will gauge user satisfaction by the user's opinion of the completeness of the reference transaction. Though this evaluation will yield some basic demographic statistics that will need to be evaluated quantitatively, the open-ended answers will require qualitative analysis.
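    A minimal sketch of how each beta-period response could be split into its quantitative and qualitative halves; the field names, rating scale, and sample answers are hypothetical:

```python
# Hypothetical beta-period responses: a 1-5 "completeness" rating plus an
# optional free-text comment (field names and values are illustrative only)
responses = [
    {"completeness": 5, "comment": ""},
    {"completeness": 4, "comment": "Quick answer, thanks!"},
    {"completeness": 2, "comment": "The chat window froze twice."},
    {"completeness": 5, "comment": ""},
    {"completeness": 3, "comment": "Took a while to get started."},
]

# Quantitative side: treat ratings of 4 or 5 as a "complete" transaction
satisfied = sum(1 for r in responses if r["completeness"] >= 4)
satisfaction_rate = satisfied / len(responses)
print(f"Satisfied: {satisfaction_rate:.0%}")

# Qualitative side: pull out non-empty comments for manual theme coding
comments = [r["comment"] for r in responses if r["comment"]]
print(f"{len(comments)} open-ended comments to review")
```

    The numeric side yields the statistics used later for the posters, while the comment pool is what the qualitative analysis would work through by hand.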

    The third evaluation will again use the results of a basic survey, and will help us determine how the librarians in charge of virtual reference feel about the service and what changes they would suggest. This data will be subjective and thus will be analyzed qualitatively.

    IV. How the results will be presented

    A. Evaluation 1

    The results of our three evaluations will be presented to three different constituencies, at different times, in different ways, and for different purposes. The first round of presentations will be directed toward the library's governing board and will concern our pre-implementation survey. Using the results of this survey, we will present the board with a multimedia report on the feasibility of implementing virtual reference in our academic library. In addition to a paper document, which will include our literature review, the survey results, and selected testimonials from staff, we will also give the board a PowerPoint presentation.

    The first part of the PowerPoint presentation will be based entirely on the questionnaire results. Each survey question will have a dedicated slide, with results mapped out in graph or chart form. The second part of the presentation will be less evaluative in nature and will contain screenshots of the proposed service. Presented in tandem, the evaluated survey results and the potential service will help us form a convincing argument for implementing a virtual reference service in our library.

    B. Evaluation 2

    Our second evaluation will be based on the results of our second round of surveys, administered after the service's preliminary implementation during a beta period. As in the University of Texas implementation, these surveys will provide information about our patrons' actual use of the service, so the evaluation results will be presented to all library staff associated with the service (Chapman & Del Bosque, 2004, pp. 67-68). As in our first round of evaluation, the librarians will receive the results in multimedia form: paper documents and PowerPoint.

    The report we present to the staff will include our qualitative analysis and will focus on the general trends we gleaned from the survey results. Selected user testimonials will also appear in the report.


    The PowerPoint presentation will mirror the document in many ways, containing the results in graphic form and selected quotations from users. However, it will also contain suggestions for rectifying the issues raised in the survey. Our presentation to the staff will conclude with a group brainstorming session, from which we hope to gain a sense of how the service will have to change before it goes fully live. We will also use the usage statistics, gathered from the continuing use of the second questionnaire beyond the beta period, to entice patrons with our virtual reference service by hanging posters throughout the library. These posters, which will say something like "84% of users agree that T. Dorney-Borrelli Library's Virtual Reference Service is a hit! See what the fuss is about" and "55% of our student body has used T. Dorney-Borrelli Library's Virtual Reference. Join them!", will not only apprise community members of our evaluation results but also market the service to still-unaware or uninitiated users.

    C. Evaluation 3

    This third evaluation will also include all questionnaires given to users at the conclusion of each virtual reference transaction, as well as the librarians' questionnaires. Both librarians' and users' experiences will be documented and compiled into continuous updates for the staff on the status of the project and any improvements that need to be made. Evaluation of the service will be an ongoing process.

    V. Evaluating results

    The survey conducted prior to the implementation of chat reference will allow us to determine whether our patrons would want or use chat reference at the T. Dorney-Borrelli Library at St. Peter's University. As we uncovered in our literature review, before installing any service it is important to take users' needs into consideration. To uncover patrons' level of interest, we created three different questionnaires that will yield quantitative and qualitative results. The purpose of evaluating the results is, of course, to improve service.

    Even though we found, through an intensive literature review, that virtual reference tends not to attract many users, another reference service access point is needed. Evaluation of our questionnaire results will play a key role in determining the scale and shape of the virtual reference service we implement. The results will also guide our technology choices and apprise us of format issues, user interest (versus use), awareness of the service, and other important data.

    As stated previously, the results of the first questionnaire are the most important, because they will influence our decision to implement the service and our choice of technology. The questionnaire will give keen insight into users' online chatting habits, and knowledge of the newest and most effective software will give the project credibility. Every available technology has positives and negatives, though the virtual reference literature has shown us that the majority of patrons prefer a medium without a required download. AOL Instant Messenger, Google Talk, and Yahoo Pager are excellent examples of technologies with an entirely online component. Programs such as Plugoo (please see Appendix D for an example on a website) place a completely online component on a website while the librarian answers through AOL Instant Messenger. Although co-browsing is not supported by this software, patrons' familiarity and comfort with it go far toward ensuring they will use the service.

    The results of all the questionnaires give libraries a plethora of information to use in evaluation (Ward, 2007, p. 1). The reference interview is a staple of the library experience, and the results give us the means to get at the heart of user needs. There is a large need for information about the correct behaviors to use in Internet environments. Using the questionnaires, we will not only learn this but will also uncover the many different ways patrons connect to the Internet. Although DSL is becoming cheaper, many Americans still connect to the Internet via dial-up. Most of the service's patrons, however, will be St. Peter's College students, so the majority will use the broadband provided by the college.

    Specifically, the results of the virtual reference transactions will be broken into categories, as Wendy Diamond and Barbara Pease (2001) did in their study of question types. They created a list of eleven types; depending on the questions received during reference transactions, that number could go up or down. Examples of these categories include standard reference resource questions, term paper/assignment help, factual information (not ready reference), catalog look-up, library policies and procedures, non-library questions, information literacy, and other questions (pp. 1-5). This type of organization is needed to make sense of the information gathered. According to the literature, when patrons were asked what they liked about chat software, they said it was convenient, fast, and anonymous (Walter & Mediavilla, 2005, p. 213).
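    A minimal sketch of how transactions, once coded with Diamond and Pease's question types, might be tallied for reporting; the transaction log below is hypothetical:

```python
from collections import Counter

# Hypothetical log of virtual reference transactions, each already coded
# with one of the question-type labels mentioned above
transactions = [
    "catalog look-up",
    "term paper/assignment help",
    "standard reference resources",
    "catalog look-up",
    "library policies/procedures",
    "non-library question",
    "catalog look-up",
]

by_category = Counter(transactions)

# Counts in descending order, ready to feed into a chart or table
for category, count in by_category.most_common():
    print(f"{category}: {count}")
```

    Because the categories are just labels, adding or removing a question type as the transaction mix changes requires no change to the tallying itself.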

    During the evaluation of the results we expect to find that, as at the University of Texas and North Carolina State University, most patrons will enjoy the service but a few will find the experience frustrating because of technical difficulties or librarians' misinterpretation of questions (Chapman & Del Bosque, 2004; Boyer, 2001). To remedy this before implementation, the virtual reference implementation team will encourage virtual reference librarians to be sure to use the traditional reference interview. As Nilsen (2006) found in her study, forgoing the reference interview created unhappy patrons, because more often than not the patrons' questions were not answered (p. 96).

    An overwhelming majority of University of Texas patrons were dissatisfied with the virtual reference service because of its hours of operation; therefore, during the beta period, virtual reference services will be offered during more night hours to assist patrons when they need the most help, outside of traditional library hours (Chapman & Del Bosque, 2004, p. 72).

    Results from survey 3:

    The third kind of data we will evaluate is what the virtual reference librarians think of the service 6-8 weeks after final implementation and beyond. The reference librarians' opinions of the project are an important component of our evaluation because they give us an informed view of the entire process. Again, we want to determine whether chat reference is worth the time and expense. We hope that our experience will differ from Chapman and Del Bosque's, who reported the feelings of a frustrated librarian: "While I am enthusiastic about the potential of chat reference as another method of interacting with users and reaching our students, I am disappointed that the service gets so little use, by either UTSA students or other UT System students. It doesn't seem like the investment of time and resources that librarians have invested are having any significant service outcome, in terms of number of patrons using the service with the energy being put into staffing the service" (Chapman & Del Bosque, 2004, p. 74). It is important to make sure that T. Dorney-Borrelli Library librarians have an outlet to improve service.

    Results praising co-browsing, audio, and video features from the virtual reference librarians would coincide with the results found in the current literature. As Boyer (2001) mentions, and as we detailed in our literature review, co-browsing "and the up and coming audio option in virtual reference was touched upon as one way to bring virtual reference services closer to the real-life interaction and instruction received in face-to-face reference" (p. 125). Information about new technologies to pair with instant messaging software has been positively referenced by many virtual reference librarians. The third evaluation gives us the information to determine what all the librarians think of the new service. It enables them to voice their concerns without threat of reproach and to detail their thoughts on the new service honestly, helping make virtual reference a well-oiled machine.

    Academic libraries will continue to implement new technologies to meet the demands of students, and the T. Dorney-Borrelli Library is part of the growing trend of information-technology-centered college libraries. We feel strongly that it is vital to reach out to patrons through different modes of communication. Implementation of virtual chat reference is a worthwhile investment if it means that students are able to connect to the library's resources. Providing any and all access points to the collection will be a constant goal for the staff at the T. Dorney-Borrelli Library.

    A collaboration among Heather Turner, Erica St. Peter, Erin Dorney, and Andrea Borrelli


    Appendix A:


    Appendix B:


    Appendix C:


    Appendix D:


    Appendix E:

    55% of the student body has used the library's
    virtual reference service.
    Join them! Visit
    www.library.stpeteu.edu/virtualref

    Reference List

    Boyer, J. (2001). Virtual reference at North Carolina State: The first one hundred days [Electronic version]. Information Technology and Libraries, 20(3), 122-128. Retrieved April 5, 2007, from ProQuest database.

    Chapman, K., & Del Bosque, D. (2004). Ask a UT System librarian: A multi-campus chat initiative supporting students at a distance [Electronic version]. Internet Reference Services Quarterly, 9(3/4), 55-79. Retrieved April 4, 2007, from H.W. Wilson database.

    Creative Research Systems. (2006). The survey system. Retrieved March 30, 2007, from http://www.surveysystem.com/sdesign.htm

    Cummings, J., Cummings, L., & Frederiksen, L. (2007). User preferences in reference services: Virtual reference and academic libraries [Electronic version]. Portal: Libraries and the Academy, 7(1), 81-96. Retrieved April 5, 2007, from ProQuest database.

    Dee, C., & Allen, M. (2006). A survey of the usability of digital reference services on academic health science library web sites [Electronic version]. Journal of Academic Librarianship, 32(1), 69-78. Retrieved April 5, 2007, from H.W. Wilson database.

    De Groote, S., Dorsch, J., Collard, S., & Scherrer, C. (2005). Quantifying cooperation: Collaborative digital reference service in the large academic library [Electronic version]. College & Research Libraries, 66(5), 436-454. Retrieved April 5, 2007, from H.W. Wilson database.

    Diamond, W., & Pease, B. (2001). Digital reference: A case study of question types in an academic library [Electronic version]. Reference Services Review, 29(3), 210-218. Retrieved April 5, 2007, from Emerald database.

    Doherty, J. J. (2006). Reference interview or reference dialogue? Internet Reference Services Quarterly, 11(3), 97-109. Retrieved March 29, 2007, from EBSCOhost database.

    Graves, S., & Desai, C. (2006). Instruction via chat reference: Does co-browse help? [Electronic version]. Reference Services Review, 34(3), 340-357. Retrieved April 5, 2007, from Emerald database.

    Graves, S., & Desai, C. (2006). Instruction via instant messaging reference: What's happening? [Electronic version]. The Electronic Library, 24(2), 174-189. Retrieved April 5, 2007, from Emerald database.

    Jane, C., & McMillan, D. (2003). Online in real-time? Deciding whether to offer a real-time virtual reference service [Electronic version]. The Electronic Library, 21(3), 240-246. Retrieved March 27, 2007, from Emerald database.

    Kloss, L., & Zhang, Y. (2003). An evaluative case study of real-time online reference service [Electronic version]. The Electronic Library, 21(6), 565-575. Retrieved April 5, 2007, from Emerald database.

    Nilsen, K. (2006). Comparing users' perspectives of in-person and virtual reference [Electronic version]. New Library World, 107(1222/1223), 91-104. Retrieved April 5, 2007, from Emerald database.

    Penka, J. (2003). The technological challenges of digital reference. D-Lib Magazine, 9(2). Retrieved April 5, 2007, from OCLC Online Computer Library Center, Inc.

    Stoffel, B., & Tucker, T. (2004). E-mail and chat reference: Assessing patron satisfaction [Electronic version]. Reference Services Review, 32(2), 120-140. Retrieved April 5, 2007, from ProQuest database.

    Walter, V., & Mediavilla, C. (2005). Teens are from Neptune, librarians are from Pluto: An analysis of online reference transactions [Electronic version]. Library Trends, 54(2), 209-227. Retrieved April 5, 2007, from ProQuest database.

    Ward, D. (2003). Using virtual reference transcripts for staff training. Reference Services Review, 31(1), 46-56. Retrieved April 17, 2007, from Emerald database.

    Ward, D. (2004). Measuring the completeness of reference transactions in online chats: Results of an unobtrusive study [Electronic version]. Reference & User Services Quarterly, 44(1), 1-11. Retrieved April 5, 2007, from H.W. Wilson database.