Sarah Weston - JUSP workshop April 2012


Sarah Weston’s presentation on the University of Portsmouth’s use of JUSP.


JUSP: The University of Portsmouth Experience

Sarah Weston, Data Manager, University Library

Background

At Portsmouth we do not currently have an ERM system or a usage statistics package. Usage data is stored locally and retrieved from multiple administration accounts. We are currently collecting data from 60+ different sources in relation to electronic journals alone

Primary objective of our internal benchmarking was to evaluate our ‘Big Deals’, determine value for money and provide a sound evidence base to assist decision making

Benchmarking activities

Initial venture into ‘Big Deal’ benchmarking was around 18-24 months ago – adopted a teamwork approach

Keen to explore the extent to which we felt our deals were providing us with value for money and to look at the implications if we were to consider cancellations

At that time we did not have the benefit of JUSP and so our early activity was a little ad hoc and not necessarily the most time efficient in terms of process

Initial process

Decided to focus on three medium-sized deals and adopted a two-strand approach

Activity A
• Obtain full title lists across multiple years and track changes
• Obtain lists of PRE X subs
• Obtain title counts for deals
• Obtain costs data

Activity B
• Access usage from publisher platforms
• Amalgamate any usage from aggregator/host platforms
• Remove any archive data
• Match usage with deal titles

Having determined the number of titles in the deal on a year-by-year basis, how much they cost and how much they were used ((JR1 - JR1a) + Aggregator + Host), it was possible to do some cost-per-use calculations
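As a rough illustration of that arithmetic (a minimal sketch with made-up figures; cost data is held locally and does not come from JUSP), the calculation for a single deal looks like this:

    # Minimal sketch of the cost-per-use calculation described above.
    # All figures are illustrative only.
    jr1 = 48500          # total full-text requests for the deal (COUNTER JR1)
    jr1a = 3200          # archive usage to exclude (COUNTER JR1a)
    aggregator = 5100    # usage amalgamated from aggregator platforms
    host = 1750          # usage amalgamated from host platforms
    annual_cost = 62000  # annual cost of the deal in GBP (local data)

    total_use = (jr1 - jr1a) + aggregator + host
    cost_per_use = annual_cost / total_use
    print(f"Total use: {total_use}, cost per use: £{cost_per_use:.2f}")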

Key issues

• For a three year period this was time consuming and involved lots of steps

• Obtaining accurate title lists (current and old) was not always easy

• Records of PRE X subs did not always match

• No one place to access information and data formats often differed

• Needed to remove all of the ‘weird and wonderful’

Internal coding

• Colour coding was adopted to distinguish PRE X subs from titles within the deal

• Titles were also tracked to show at what points they entered the deal as this was important in terms of calculations
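A minimal sketch of what that tracking amounts to in practice (hypothetical titles and years; in reality this was done with colour coding in a spreadsheet):

    # Hypothetical illustration of tagging titles by status and deal-entry year,
    # mirroring the colour coding used in the spreadsheet.
    titles = [
        {"title": "Journal A", "status": "PRE X sub", "entered_deal": None},
        {"title": "Journal B", "status": "deal", "entered_deal": 2009},
        {"title": "Journal C", "status": "deal", "entered_deal": 2011},
    ]

    # Only count a title as part of the deal in the years after it entered it.
    def in_deal(record, year):
        return record["status"] == "deal" and record["entered_deal"] <= year

    print([t["title"] for t in titles if in_deal(t, 2010)])  # ['Journal B']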

What could JUSP do for us?

We like JUSP and it is doing more for us on a month by month basis! Our needs:

• On-going time series of data
• Usage amalgamated from all sources
• Ability to easily identify PRE X subs and titles within the deal over time
• Ability to extract title usage relating to open access, trials etc.
• Need to include some elements of print (on our own here!)

A few of our favourite things!

Having already started to add our subscribed titles, the ‘titles versus deals’ report enables us to identify titles within our deal, which is our baseline for analysis, and to separate the PRE X titles so we can accurately benchmark our costs

A few of our favourite things!

By downloading a copy of the CSV file for this report, you can see that some additional information has been added in terms of aggregator usage
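As a hedged sketch of how such an export can be worked with locally (the file name and column names below are hypothetical and will differ from the actual JUSP CSV layout):

    # Hypothetical sketch: combining publisher and aggregator usage per title
    # from a downloaded report. Column names are illustrative only.
    import csv

    with open("jusp_titles_vs_deals.csv", newline="") as f:
        for row in csv.DictReader(f):
            total = int(row["Publisher usage"]) + int(row["Aggregator usage"])
            print(row["Title"], total)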

Titles included in deals across multiple years

The titles within deals over time report gives at-a-glance information on how deal content has changed, which facilitates accurate reporting

Publisher usage by title and year

The most valuable report for our benchmarking, eliminating a significant number of steps and providing an accurate time series into which we can import our own data

Titles and usage range

This report is likely to be important: one of our internal benchmarks has been three-figure usage. The ability to see at a glance the breakdown of package usage will be helpful

Impact

The portal manipulates our usage data and significantly reduces the number of steps prior to our own analysis

Our benchmarking had focused on smaller deals; however, this will make our larger reports much easier to manage and more time efficient to produce

We have not always known exactly what we have wanted and some of the more experimental reports have been particularly welcomed

Where do we go from here?

The portal provides us with an accurate record of titles and usage in a deal over time

Allows us to produce accurate reports into which we can now import cost data and subsequently calculate costs per download either within deal or at title level

From this we are able to apply some of our own internal criteria for benchmarking and look at titles within a certain cost-per-download banding, with three-figure usage, PRE X subs, or the status of the title as determined by faculty/departments
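A minimal sketch of applying those internal criteria, assuming hypothetical title-level records with usage and cost data already merged (the records, field names and banding threshold are illustrative, not JUSP output):

    # Hypothetical sketch: checking titles against our internal benchmarks.
    titles = [
        {"title": "Journal A", "usage": 1240, "cost_per_download": 0.85, "pre_x": False},
        {"title": "Journal B", "usage": 37, "cost_per_download": 6.40, "pre_x": True},
    ]

    for t in titles:
        print(
            t["title"],
            "three-figure usage" if t["usage"] >= 100 else "below three-figure usage",
            f"£{t['cost_per_download']:.2f} per download",  # example cost-per-download band
            "PRE X sub" if t["pre_x"] else "deal title",
        )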

Summary

• The portal provides us with a valuable ‘one stop shop’

• It has assisted us with our internal processes

• Continues to evolve and responds to user needs