REVIEWING SIGNIFICANCE 3.0
a framework for assessing museum, archive and library collections’ significance, management and use

January 2018 © Caroline Reed
Caroline Reed, Museum and Heritage Consultant
CONTENTS

1 INTRODUCTION
2 THE METHODOLOGY
3 THE COLLECTIONS REVIEW PROCESS
4 THE SIGNIFICANCE ASSESSMENT PROCESS
5 FURTHER SOURCES AND RESOURCES

APPENDICES
1 Planning and managing a successful Significance Assessment Session
2 Significance Assessment workshop – case study
1 INTRODUCTION
Assessing the significance and potential of museum collections is increasingly recognised as an integral
element of strategic collections management and development - and essential when prioritising resource
allocation and seeking external funding or support.
In the UK, Collections Review has been newly included in version 5 of the Collections Trust’s SPECTRUM
collection management standard, published in September 2017 (See Section 5 FURTHER SOURCES AND
RESOURCES).
Reviewing Significance 3.0 provides analytical tools to aid museums, archives and libraries of all sizes
through two processes:
Collections Review
• understanding current levels of usage
• assessing current standards of care and management
• reviewing care and management priorities in the light of material’s use value and significance
Significance Assessment
• assessing collections’ meaning and value for key groups of current and potential users
The two processes are complementary and designed to be used either together or independently.
The original Reviewing Significance model for collections assessment was commissioned in 2009-2010 for
Renaissance East Midland (the then regional cultural agency for museums). It was developed by consultant
Caroline Reed and a team of curators from University College London’s Museums and Collections, Jayne
Dunn, Subhadra Das and Emma Passmore. The model was published online in 2010 and Caroline published
an updated version Reviewing Significance 2.0 on the UK Collections Trust website in 2012.
In 2011 Screen Heritage UK and Film London commissioned Caroline Reed and film archivist David
Cleveland to adapt Reviewing Significance for application to moving image collections – including material
that archives can’t currently view because of its fragility or format. The moving image Collections
Assessment Toolkit was published online beside The Screen Heritage UK moving image collections
handbook (See Section 5 FURTHER SOURCES AND RESOURCES).
This 2018 edition, Reviewing Significance 3.0, extends the scope of the methodology for application to
archive and library as well as museum collections and draws extensively on work undertaken in the archive
at London’s Southbank Centre in 2013 and across the museum, archive and library collections of The Royal
College of Surgeons of England (RCSE) between 2014 and 2016. A number of modifications to the Collections
Review Process tools have been made with very helpful input from the RCSE project team led by Thalia
Knight, Director of Library and Surgical Information Services, and Beth Astridge, who was seconded to
manage the project from her then post as Head of Records and Archives.
2 THE METHODOLOGY
The Reviewing Significance methodology comprises two processes: Collections Review and Significance
Assessment. They are designed to be used either separately or together.
• The Collections Review Process allows for every object, book or archival deposit in a collection to be
physically checked. It makes this seemingly overwhelming task achievable by dividing the material into
‘Review Units’ on a space-by-space basis rather than by topic or type.
It offers a quick, but thorough, technique for ‘scoring’ and comparing current levels of collections
management, care and documentation with levels and types of access and usage. This provides a
comprehensive, one-off snapshot - an evidence base for forward planning and resource allocation.
The approach encourages the reviewing team to think realistically about material’s potential and
consider how this might be realised.
The Collections Review Process is designed as a one-off exercise. It is best applied over a reasonably
concentrated timeframe by a small team able to dedicate most of its work time to the project. This is
demanding, but facilitates consistency of approach and provides an immediately usable result.
• The Significance Assessment Process supports a museum, library or archive to deepen its
understanding of collections’ meaning and value for a range of potential audiences and for the
organisation itself.
The method provides ‘thinking tools’ that prompt and structure assessment and present its outcomes
in a clear, concise Statement of Significance. It can be used on individual items, themed ‘clusters’ or
other groupings of linked material. The approach can be readily applied to the consideration of sub or
even whole collections. It supports teams from across an organisation to pool their perspectives and
expertise and to bring external experts into the Process if required.
Significance Assessment can provide a critical starting point for decision making on maximising the use
of collections, on acquisition, rationalisation, dispersal or disposal or on selecting material for e.g.
display, digitisation, preservation or access programmes.
In addition to running formal Significance Assessment sessions, once staff members are familiar with
the thought processes involved they will find it easy to apply them on an ad hoc basis to their daily
work.
The Significance Assessment Process is intended to be used on a targeted, ‘as needed’ basis, usually to
underpin a specific assessment objective.
Both the Collections Review and the Significance Assessment tools are intended to be flexible and easily
modified to reflect the user organisation’s specific project objectives, the nature, needs and interests of
their priority user groups and/or the nature of their collections and site.
The two Reviewing Significance processes help museums, libraries and archives to:
• Develop a better understanding of collections and how they might be used
• Identify priority ‘hot spots’ for intervention
• Create a strong evidence base to inform forward planning, resource allocation and funding bids for
collection management, rationalisation and development
• Establish a narrative for communicating items’ and collections’ importance to governing bodies,
managers and funders
• Identify items and groups of material that have a strong resonance for target audiences
3 THE COLLECTIONS REVIEW PROCESS
The idea of checking every item in an organisation’s collections can seem daunting. The Collections Review
Process’s space-by-space approach is designed to make it manageable. The method is partly based on the
‘Collections Review Rubric’ developed and used by University College London’s Museums and Collections
between 2007 and 2009. In those two years a team of just two surveyors and occasional helpers
successfully surveyed 18 collections containing a total of 380,000 objects spread across 190 storage, display
and museum spaces (See Section 5 FURTHER SOURCES AND RESOURCES).
The Collections Review Process generates a robust, easily presented set of data, pinpointing collections’
strengths and weaknesses and flagging up any anomalies that exist between levels of care, management,
documentation/cataloguing and accessibility and actual or potential usage.
The Process uses a score-based system for data analysis. The collection under review is divided into
individual Review Units and these are ‘scored’ against criteria presented in two matrices: the Usage Grid
and the Collections Management Grid. Scores and short notes are logged on the Collections Review
Survey Form. If needed, there is also a Collections Review Notes Form for recording lengthier supporting
information or queries. Data from the Survey Form is entered onto the Collections Review Datasheet for
analysis. This provides an easily presented ‘traffic light’ overview of survey outcomes for all the Review
Units assessed, flagging up priority areas for either immediate or longer term intervention.
The Collections Review Process tools are:
• Usage Grid (colour-coded light purple)
• Collections Management Grid (colour-coded light blue)
• Collections Review Survey Form (colour-coded to match the Grids)
• Collections Review Notes Form (colour-coded to match the Grids)
• Collections Review Datasheet (This is currently supplied as a Microsoft Excel 2010 spreadsheet, with
headings colour-coded to match the Grids and review data highlighted by a ‘traffic-light’ colouring
system).
3.1 GETTING STARTED – PLANNING

Careful planning will be required to make the most efficient use of staff time and skills and to ensure that
the Collections Review meets all of your organisation’s objectives.
The Collections Review Process is designed to be conducted fairly intensively, over a reasonably short
period, by a small survey team able to dedicate most of their work time to the project. This ensures
consistency and efficiency as the surveyors develop confidence and speed. Surveyors need to have a basic
understanding of collection care criteria and management procedures. They do not need to be
conservators or subject specialists in the field of the collection, but they will need access to relevant
expertise in these areas.
Depending on the size of the organisation, it will probably be helpful for the project to be overseen by a
steering group not just of collection managers, but also colleagues involved in e.g. exhibition, outreach,
learning, documentation, conservation and marketing. The steering group should identify any additional
external advisers they may wish to invite to support the Process – e.g. specialist subject experts or
conservators.
The steering group will set the parameters for application of the Collections Review Process, agree
objectives and make final decisions on the display and/or storage areas to be reviewed and on the
optimum size of Review Units in each space. They will decide how the project is to be staffed – whether by
allocating blocks of existing staff’s time or by recruiting short term project workers.
Whichever approach is taken to staffing for the hands-on survey element of the Collections Review Process,
the organisation’s own collection manager(s) will need to be closely involved in managing and supporting
the project throughout its duration. They need to understand and ‘buy in’ to the Process – they will be the
ones responsible for using, and planning on the basis of, the resulting data once the review is complete.
Throughout the project the survey team will need access to senior colleagues and/or external advisers who
have the expertise to give specific guidance about both material’s preservation requirements and its
current or potential usage. The survey team need to be told what to look out for – and when to ask for
specialist advice. They need to work systematically, noting queries as they go and presenting these to their
specialist advisers in a way that makes efficient use of everybody’s time.
As a first step, small teams of steering group members – including the relevant collection managers - should
pilot the survey methodology with one or more Review Units in each type of store and/or display space and
on each type of collection to be included in the review.
This will help them decide
• whether the generic Collections Review tools provided by Reviewing Significance 3.0 work well as they
stand or whether they need customisation to meet organisational or project objectives
• what level of skills and expertise will be required from any project workers being recruited to
undertake the hands-on survey work
• whether the anticipated timescale for delivering the project is realistic
Choosing the Review Units
The size of Review Units will vary depending on the objectives of the review and the nature of the
collections. For most reviews a sensible Review Unit might be a single shelf- or box-full of material. Teams
conducting a small, tightly focussed review, or one dealing with very large objects, might wish to treat each
individual item as a Review Unit in its own right. Conversely, the system will work well, and still provide
valuable data, when used on a much bigger scale, taking a whole bay of shelving or even a whole storage
room as an individual Review Unit.
In the RCSE Hunterian Museum’s Instrument Store the ‘Review Unit’ was a single drawer or shelf.
In the RCSE Library’s Erasmus Wilson Room Gallery the ‘Review Unit’ was a single shelf.
At a museum in the East Midlands individual vehicles were assessed as ‘Review Units’ in their own right.
3.2 THE COLLECTIONS REVIEW GRIDS

The Usage and Collections Management Grids present criteria against which each Review Unit is scored.
3.2.1 The Usage Grid

Current and potential levels of usage for each Review Unit are scored against criteria presented in columns
on the Grid under the following headings:
Across columns A-D it is important to consider potential as well as current usage:
A AUDIENCE APPEAL / MARKETABILITY
Scores material in the Review Unit for its current and potential popular and media appeal,
highlighting items already featured in displays or in online or hard copy marketing materials,
research and familiarisation resources and/or guides
B SUPPORTS LEARNING
Scores material’s value for supporting formal education and training programmes and/or informal
learning opportunities
C ENQUIRY / RESEARCH
Scores material’s current usage and/or potential to support academic and other research or for use
by staff when answering enquiries
D SIGNIFICANCE
Supports the surveyor to make an indicative summary significance assessment and give a score for
material’s meaning and value for current and potential audiences.
Working through this column will also help identify individual items and/or groups of material that
could benefit from application of the full Significance Assessment Process
Column E looks specifically at current access provision
E CURRENT ACCESS FOR USERS
Scores current levels of access that are provided both to the material itself and to catalogues and
other documentary information about it
Gathering Evidence and Scoring for Usage

As part of its planning to support the survey team through the Usage element of the review Process, the
project steering group will need to identify any usage monitoring information that is currently gathered and
any other sources that might provide evidence of how the material is currently being used and/or how it
has been used in the past. These might include e.g. records of individual items’ display or loans history;
records of researcher access to the material in the organisation’s search or reading rooms; published
citations; evidence of material having featured in the organisation’s own publications, online blogs,
exhibitions etc.
It will also be important for the survey team to have ready access to all available catalogue information and
supporting documentation about the material under review.
With this information to hand, and by physical examination of the items themselves, the survey team will
be able to assess what is known about the past, current and potential usage of material in the Review Unit
against each set of criteria on the Usage Grid. It will usually be possible to allocate a score between 1 and 5
in each column. ‘5’ represents the lowest level of usage within the criterion and ’1’ the highest.
In some instances the surveyors may feel that they have insufficient evidence about the material’s history
of use or understanding of its potential, so they cannot immediately give it a score. Where more research is
needed, it is possible to allocate a zero, ‘0’, score. When this appears on the datasheet it will flag up a need
for further investigation - particularly before any decisions are made about allocating resources for the
material’s preservation or about retention.
A new feature in Reviewing Significance 3.0 is that the score rows 1-5 on both the Usage and the Collections
Management Grids are presented in reverse order (5-1). This avoids repetition on the Grid itself and allows
the surveyors to work up through the scores from the lowest score, ‘5’, until they find the set of criteria
that best reflect what they know about the current or potential usage of material in the Review Unit.
3.2.2 The Collections Management Grid

The purpose of the Collections Management Grid is to facilitate a time- and resource-efficient assessment
of the current level of collection management and care being offered to each Review Unit. It will alert the
surveyor to areas for concern and possible intervention, but is not seen as a substitute for more
comprehensive self-assessment tools such as (for UK museums) Benchmarks in Collections Care (See
Section 5 FURTHER SOURCES AND RESOURCES).
Current levels of care and management being applied to each Review Unit are scored against criteria
presented in columns on the Grid under the following headings:
F STORAGE + DISPLAY - SECURITY + EMERGENCY PLANNING
Scores emergency planning and security provision and management for the whole storage or
display space + the security level offered by the individual storage cabinet, rack or display case in
which the Review Unit is housed
G STORAGE + DISPLAY – SPACES + RACKING etc.
Scores provision for safe handling and inspection within the whole storage area + the condition and
the appropriateness for purpose of the storage and display racking, cabinet, display case etc. in
which the Review Unit is housed
H STORAGE + DISPLAY - PACKING + DISPLAY MOUNTS
Scores the appropriateness for purpose of boxes, packing and display materials in which the Review
Unit is housed + reviews current handling practice
I STORAGE + DISPLAY - ENVIRONMENTAL MONITORING + MANAGEMENT
Scores all aspects of environmental monitoring and management + pest control within the whole
storage area
J COLLECTION ITEMS - CONDITION ASSESSMENT
Scores the current condition of material in the Review Unit + reviews current condition monitoring
procedures
K COLLECTION ITEMS – OWNERSHIP
Assesses and scores for the known ownership + copyright status of material within the Review Unit
L CATALOGUING / DOCUMENTATION
Scores the adequacy, completeness + retrievability of catalogue data + supporting documentation
for material within the Review Unit (see USAGE GRID for staff and user access to documentation)
Gathering evidence and scoring for Collections Management

Allocating scores 1-5 against each of the criteria on the Collections Management Grid is intended to give an
accurate snapshot of the level of management and care being applied to each Review Unit.
To achieve this, before undertaking the review the survey team will need to have taken appropriate
guidance and agreed on what are deemed appropriate standards of security, care and management for
each type of collection material being reviewed. They will need to have access to any recent environmental
monitoring data, where available, and to any existing procedure manuals and condition assessment
guidelines.
As with the Usage Grid, it will also be important for the surveyors to have access to all available catalogue
information and supporting documentation about the material under review.
Informed by these records, and by comparing what can be observed about the level of management
and care being applied to each Review Unit against the criteria presented on the Collections Management
Grid, it should always be possible for the survey team to allocate a score between 1 and 5 for each column.
No zero ‘0’ score - indicating the need for further research - is included on the Collections Management
Grid. ‘5’ represents the lowest level of care and management and ‘1’ the optimum.
As on the Usage Grid, in Reviewing Significance 3.0 the score rows 1-5 are presented in reverse order – with
5 at the top. This avoids repetition on the Grid and allows the surveyor to work up through the scores from
the lowest baseline, ‘5’, until they find the set of criteria that best reflects the current level of care and
management that applies to the Review Unit. Surveyors who have tested this reverse configuration of the
Grids say it makes them feel more confident and positive about the scores allocated.
3.2.3 Weighting the scores

Reviewing Significance recognises that the scores appropriate for individual items within each Review Unit
are likely to vary. For example, one or two items in a unit might be in poor condition, or less fully
catalogued than the rest, or they might have a much more evident usage value.
Averaging the scores appropriate for individual items within the Review Unit across the whole Unit would
provide bland and unhelpful data. Instead, the Process requires that surveyors ‘weight’ the scores in a way
that will flag up problem areas and/or highlight particularly significant or usable material.
To achieve this, weighting the scores works differently for each of the two Grids:
• When assessing Review Units against the criteria on the Usage Grid it is important for surveyors to
apply the highest score appropriate for any single item within the Review Unit to the whole Unit.
Where appropriate, a short explanation should be dropped into the ‘Brief Notes’ column of the
Collection Review Survey Form. This will locate and highlight individual items that have potential for
new or increased usage. If more detailed information needs to be recorded, this can be captured on the
Collections Review Notes Form.
Example:
The Review Unit in a museum store is a shelf containing 20 objects. Of these, 17 have little potential
for use in ‘B SUPPORTS LEARNING’ and would score just ‘4’. Three of the items, however, have clear
potential for use in National Curriculum based activities and score ‘2’. Here the whole Review Unit is
given a ‘2’. The notes section of the Collections Review Survey Form is used to flag up the three ‘star’
items, citing their reference numbers. If required, more detailed notes on how the items link to
specific aspects of the Curriculum could be recorded on the Collections Review Notes Form.
• Conversely, when assessing Review Units against criteria on the Collections Management Grid
surveyors should apply the lowest rating that could be applied to any single item within the Review
Unit to the whole Unit. This will flag up items that require attention. Again, an explanation should be
dropped into the ‘Brief Notes’ column and, if necessary, amplified on the Collections Review Notes
Form.
Example:
The Review Unit is a shelf containing 12 books; 11 of these score ‘2’ for ‘J COLLECTION ITEMS -
CONDITION ASSESSMENT’, but one item rates as a ‘5’ and needs urgent conservation attention. Here
the whole Review Unit is scored ‘5’ and a note is made on the Collections Review Survey Form (with
more detailed notes, if needed, on the Collections Review Notes Form) to identify the item at risk
and describe the problem.
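The two weighting rules can be sketched in code. The following is a minimal Python illustration (the function names are our own, not part of the published toolkit). Note that because ‘1’ is the best score and ‘5’ the worst, the ‘highest’ usage score is the numerically smallest value, while the ‘lowest’ care rating is the numerically largest:

```python
def weight_usage_scores(item_scores):
    """Usage Grid weighting: the whole Review Unit takes the score of its
    best item. Because '1' is the highest usage value and '5' the lowest,
    that is the numerically smallest value, so strong individual items
    are never averaged away."""
    return min(item_scores)


def weight_management_scores(item_scores):
    """Collections Management Grid weighting: the whole Review Unit takes
    the score of its worst-cared-for item. Because '1' is the optimum and
    '5' the lowest level of care, that is the numerically largest value,
    so items at risk are always flagged."""
    return max(item_scores)


# The two worked examples from the text:
# 17 objects score '4' and 3 score '2' for 'B SUPPORTS LEARNING' -> unit scores 2
print(weight_usage_scores([4] * 17 + [2] * 3))    # 2
# 11 books score '2' and one scores '5' for 'J CONDITION ASSESSMENT' -> unit scores 5
print(weight_management_scores([2] * 11 + [5]))   # 5
```

In other words, weighting deliberately replaces averaging with a min/max rule so that a single notable or at-risk item always shows through in the Unit's score.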
3.2.4 Customising the Grids

As noted above, organisations should always pilot the Collections Review Process and tools in a few sample
Review Units before starting the full survey. Both the Usage and the Collections Management Grids are
intended to be flexible and a few pilot reviews will show whether they need to be customised.
The Usage Grid

As with the Significance Assessment Process Grid (see Section 4 below), at an early stage in the project the
steering group need to give thought to identifying the organisation’s priority user groups and then double
checking that the criteria given on the Usage Grid accurately reflect their user groups’ interests and needs.
e.g. a university museum might wish to look especially closely at material’s potential for undergraduate
teaching or academic research.
The Collections Management Grid

Piloting the Collections Management Grid to survey sample Review Units across the full range of an
organisation’s storage and display spaces and collection types will highlight any areas where the criteria on
the Collections Management Grid need modification or amplification to ensure that the review scores give
a genuinely accurate and useful picture.
e.g. One organisation that piloted the Collections Management Grid across all its collection types found
that their scores for ‘F STORAGE + DISPLAY - SECURITY + EMERGENCY PLANNING’ were uniformly poor for every
storage and display space. This was because they had no effective organisation-wide emergency plan. The
resulting, undifferentiated, poor scores obscured any nuanced understanding of important variations
between the spaces in terms of security. Their solution was to modify the Grid by dividing ‘F’ into two
columns: ‘F1 STORAGE + DISPLAY – SECURITY’ and ‘F2 EMERGENCY PLANNING’.
3.3 RECORDING AND PRESENTING THE SCORES
3.3.1 The Collections Review Survey and Notes Forms

As the survey team work their way through columns A-E on the Usage Grid and F-L on the Collections
Management Grid, they identify the score most appropriate for the Review Unit they are considering and
log each score on a hard-copy Collections Review Survey Form.
This simple, paper-and-pencil based logging system supports the capture of scoring data and notes in-situ
where using a laptop may not always be safe or efficient. To date, most survey teams have opted to use the
paper-based system when working within storerooms and display spaces and then enter their data into the
Collections Review Datasheet back at their desks.
They found that, while not time-consuming, this gave them a useful opportunity for checking, reflection
and, where necessary, discussing the scores with collection managers and other colleagues.
However, as devices get smaller and batteries last longer, organisations may wish to experiment with
entering scores and notes straight into the Collections Review Datasheet on a laptop or tablet.
The Collections Review Survey Form
The Collections Review Survey Form is headed with boxes for entering organisation name, collection name,
building name, room name or number, surveyor name(s), and the date on which the survey was conducted.
Below this there is a table with one row allocated for each Review Unit. Here the surveyor enters an
identifier for the bay, rack, or cabinet in which the Review Unit is housed (an organisation may need to
adapt this terminology to suit its own location identification system); a unique identifier for the Review Unit
itself (this might be e.g. a drawer or shelf number); a total for the number of items included in the Review
Unit; the scores given for each heading on the Usage and Collections Management Grids; and any necessary
brief notes:
The lettering of the Grid headings ‘A-L’ makes it easy to reference notes to the relevant column.
Collections Review Survey Form

Header fields: Organisation name | Collection | Building | Room | Surveyor(s) | Date

Columns (one row per Review Unit):
Bay / Rack / Cabinet ref | Review Unit Ref | Number of items in Review Unit |
A AUDIENCE APPEAL / MARKETABILITY | B SUPPORTS LEARNING | C ENQUIRY / RESEARCH | D SIGNIFICANCE |
E CURRENT ACCESS FOR USERS | F STORAGE + DISPLAY – SECURITY + EMERGENCY PLANNING |
G STORAGE + DISPLAY – SPACES + RACKING etc. | H STORAGE + DISPLAY – PACKING + DISPLAY MOUNTS |
I STORAGE + DISPLAY – ENVIRONMENTAL MONITORING + MANAGEMENT |
J COLLECTION ITEMS - CONDITION ASSESSMENT | K COLLECTION ITEMS – OWNERSHIP |
L DOCUMENTATION / CATALOGUE | Brief Notes (+ use separate Notes Form if needed)
Sometimes more extensive notes are required and a separate Collections Review Notes Form is supplied as
part of the Collections Review Process toolkit. Intended for use only as and when needed, this allows
surveyors to record fuller comments or flag up queries about aspects of the Review Unit as a whole or
about individual items.
3.3.2 The Collections Review Datasheet

The Collections Review Datasheet provides for presentation of typed-up data from the Collections Review
Survey Form in an easily readable spreadsheet format (designed in Microsoft Office Excel 2010). Automatic
‘traffic-light’ colour coding on the spreadsheet shows the scores as: 5 = red, 4 = orange, 3 = yellow, 2 = pale
green, 1 = bright green, 0 = grey.
(The colour coding is automated using the ‘Conditional Formatting’ function in Excel 2010 – this might need
adapting to comply with your organisation’s current spreadsheet software)
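For organisations working outside Excel, the same score-to-colour mapping could be reproduced in a short script. The sketch below simply encodes the colour scheme described above; the dictionary and function names are illustrative and not part of the supplied toolkit:

```python
# Traffic-light colour scheme used on the Collections Review Datasheet:
# 5 = red, 4 = orange, 3 = yellow, 2 = pale green, 1 = bright green, 0 = grey
SCORE_COLOURS = {
    5: "red",
    4: "orange",
    3: "yellow",
    2: "pale green",
    1: "bright green",
    0: "grey",   # 0 = insufficient evidence; further research needed
}


def colour_for(score):
    """Return the traffic-light colour for a Collections Review score."""
    if score not in SCORE_COLOURS:
        raise ValueError(f"Scores run from 0 to 5, got {score!r}")
    return SCORE_COLOURS[score]


print(colour_for(5))  # red
print(colour_for(0))  # grey
```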
The Datasheet includes a ‘Notes’ column and all notes pencilled on to the Collections Review Survey and
Notes Forms should be typed up or summarised here. This ensures that the notes remain accessible as part
of the permanent record of the Review Process.
It is usually helpful for the surveyors to enter their scores and any related notes into the Collections Review
Datasheet as soon as possible after the review – whilst memories are still fresh. This gives an opportunity
to reflect on the scores given and check for consistency with survey team colleagues as well as to note any
specific queries for referral to collections managers, subject experts or conservators.
It is advisable to retain and file the paper Survey and Notes Forms for later reference – at least as an
interim measure until the whole review project is complete.
The figure below shows the first rows of a Collections Review Datasheet completed during the RCSE pilots:

Organisation name: ROYAL COLLEGE OF SURGEONS OF ENGLAND
Collection: MICROSCOPE COLLECTION
Building: RCS
Room: 5TH FLOOR STORE - S5B
Surveyor(s): SAM ALBERTI & MARTYN COOKE
Date: 11/08/2014

Bay / Rack / Cabinet ref | Review Unit Ref | No. of items | A | B | C | D | E | F | G | H | I | J | K | L | Brief Notes
1/57 | x1 | 50 | 2 | 3 | 1 | 1 | 2 | 3 | 2 | 1 | 3 | 2 | 2 | 2 | All slides present
2 | x1 | 250 | 5 | 4 | 4 | 0 | 5 | 3 | 5 | 5 | 3 | 4 | 5 | 4 | Approximate count
3.3.3 Making sense of the data

By its colour coding, the Collections Review Datasheet provides an instant, easily presented overview of
survey outcomes for all the Review Units assessed.
The 1-5 ‘traffic-light’ colour coding makes it easy to spot anomalies between the assessed current or
potential usage value of material within a Review Unit and the level of care or access which the unit is
currently being given.
e.g. If a Review Unit scores ‘1’ for ‘B SUPPORTS LEARNING’ and ‘2’ for ‘C ENQUIRY / RESEARCH’, but ‘4’ for
both ‘E CURRENT ACCESS FOR USERS’ and ‘L DOCUMENTATION / CATALOGUE’, this shows at once that
access to both the material and to information about it can be difficult for public users and researchers –
and even for colleagues within the organisation – so the material cannot currently be used to its full
potential.
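Once the scores are in spreadsheet form, this kind of usage-versus-access anomaly can also be hunted for programmatically. The sketch below is a hypothetical illustration only: the choice of columns, the threshold gap of 2 and all names are our own assumptions, not part of the published toolkit:

```python
def flag_access_anomaly(unit, gap=2):
    """Flag a Review Unit whose usage value (columns B and C, where 1 is
    the highest) is much better than its access and documentation scores
    (columns E and L, where 5 is the worst). 'unit' is a dict mapping
    column letters to scores. Returns True when the gap suggests the
    material cannot currently be used to its full potential."""
    usage_best = min(unit["B"], unit["C"])
    access_worst = max(unit["E"], unit["L"])
    return access_worst - usage_best >= gap


# The example from the text: B=1, C=2 but E=4 and L=4 -> flagged
print(flag_access_anomaly({"B": 1, "C": 2, "E": 4, "L": 4}))  # True
# A unit whose access broadly matches its usage value is not flagged
print(flag_access_anomaly({"B": 2, "C": 2, "E": 2, "L": 3}))  # False
```

A real implementation would iterate over every row of the Datasheet and report the flagged Review Unit references, but the comparison at its core is the one shown here.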
The ‘0’, zero, score from the Usage Grid shows up in grey on the Datasheet – indicating that more research
work needs to be done before any decisions can be made about material in the Review Unit.
As in the RCSE example shown above, the colour system will also highlight any unexpected variations in
management or usage between individual Review Units within the same store – or across a number of
storage and/or display areas.
Of course the ‘traffic light’ is just an indicator. Any apparent anomalies will always need to be considered
within the wider context of an organisation’s priorities, its objectives for each section of its collections and
the appropriate level of care demanded by each collection type.
e.g. if museum objects are part of a defined learning or outreach collection then security, packing and
handling arrangements will reflect that use. Similarly for a library’s current reference or lending stock it
would never be appropriate to apply the tight security criteria that would earn a ‘1’ or ‘2’ – to be usable by
readers the books and journals might have to be on open access and only score a ‘3’ or ‘4’. For the same
library’s rare book collections, however, scoring a ’3’ or ’4’ for security would flag up a problem needing to
be addressed.
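For organisations holding their datasheet scores in digital form, the anomaly check described above can be automated. The sketch below is illustrative only: the column letters chosen, the thresholds and the flag wording are assumptions for this example, not part of the framework itself.

```python
# Scores run 1 (high value / best practice) to 5; 0 means "needs more research".
# Which columns count as "usage" and "care" is an assumption for this sketch.
USAGE_COLS = ["B", "C"]   # e.g. SUPPORTS LEARNING, ENQUIRY + RESEARCH
CARE_COLS = ["E", "L"]    # e.g. CURRENT ACCESS FOR USERS, DOCUMENTATION/CATALOGUE

def review_unit_flags(scores):
    """Return human-readable flags for one Review Unit's datasheet row."""
    flags = []
    # A zero anywhere shows grey on the Datasheet: research before deciding.
    if any(scores[c] == 0 for c in scores):
        flags.append("grey: research needed before decisions are made")
    # The traffic-light anomaly: highly valued material scoring poorly on care/access.
    high_value = any(scores[c] in (1, 2) for c in USAGE_COLS)
    poor_care = any(scores[c] in (4, 5) for c in CARE_COLS)
    if high_value and poor_care:
        flags.append("anomaly: high-value material with poor access/care scores")
    return flags

# The example from the text: B=1, C=2 but E=4 and L=4.
unit = {"B": 1, "C": 2, "E": 4, "L": 4}
print(review_unit_flags(unit))
```

As the text notes, such a flag is only an indicator; whether it matters depends on the collection type and the organisation's objectives for that material.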
Outcomes from using the Collections Review Process include:
• Generating a systematic, quickly achieved and readily communicated overview of collections: what is
where; how material is currently being used and managed; what its potential might be
• Spotting problem areas in collections and information management and highlighting where these apply
to significant material
• Providing objectively generated evidence to support resource allocation and funding bids
• Understanding the impact and/or effectiveness of current access provision and interpretation
• Generating a helpful tool for monitoring and handover - especially when preparing for a change of
collection managers
• Preparing for external inspections and self-assessment reviews
• In parts of the UK, providing an evidence base to support development of Arts Council England
Museum Accreditation and Archive Accreditation policy and planning documents (See Section 5
FURTHER SOURCES AND RESOURCES)
• Identifying material that would benefit from closer scrutiny through the Significance Assessment
Process
4 THE SIGNIFICANCE ASSESSMENT PROCESS
The Significance Assessment Process uses two analytical ‘thinking tools’ to help the assessment team to
probe the meaning and value of the material being assessed and then develop a concise, clear Statement
of Significance to communicate that meaning in a variety of contexts.
While the Collections Review Process gives a snapshot overview of a whole collection’s use and
management, the Significance Assessment Process is intended to be applied to a single item or a focussed
group of material and used on an ‘as needed’ basis to underpin an identified assessment requirement. That
might be to inform a specific decision making process – such as selecting objects for exhibition – or as part
of a wider collections review.
The standard Significance Assessment Process outlined below is intended to be used by groups of staff –
and sometimes external advisers - coming together for an intensive session. Each session must have a clear,
achievable purpose and this needs to be communicated to all participants.
Participants in a Significance Assessment session will very often come from a mix of disciplines across the
organisation. Pooling the different perspectives, expertise and knowledge of staff and volunteers from e.g.
curatorial, conservation, learning, outreach, marketing and front of house departments will help inform the
Process and make it useful to the whole organisation.
It will often be helpful and illuminating to add external advisers to the mix. External participants might be
e.g. subject experts, specialist conservators, current or potential users. A Significance Assessment session
can prove an excellent opportunity for developing closer ties with professional colleagues from beyond the
organisation or involving knowledgeable supporters (e.g. trustees) in a hands-on, collection-focussed task.
Whether or not external participants are involved, a team new to the Process is likely to need half an hour
to 45 minutes to assess even a single item or a small grouping of material. Larger groupings or clusters will
take an hour or so. After that, as the team become familiar with the Process, sessions will probably speed
up considerably.
Following the session, one or more staff members will need to allocate time for recording and
communicating the outcomes. It is important for this record to be retained and retrievable for future as
well as short term use.
If an organisation plans to carry out a large number of assessments, it will find it relatively easy to
streamline application of the Process. Once a few full team assessments have been made and a group ethos
has been established, it should be possible for smaller groups or even an individual assessor to run through
the whole Process quite quickly and bring draft Statements of Significance back to the full assessment
team for discussion and sign off. If taking this approach, to ensure consistency it will be advisable to
nominate one person from the team to oversee application of the Process and finalise all the draft
Statements of Significance.
As the analytical thinking processes used in Significance Assessment become familiar and embedded in an
organisation, the approach lends itself to even more flexible application. At the Royal College of Surgeons,
staff anticipated being able to apply the approach regularly to their daily work, either working alone or in
small informal groups - e.g. when sifting and preparing evidence about the value and meaning of an item
being considered for acquisition or disposal.
Using the Significance Assessment approach in a more loosely constructed, workshop type session offers
great opportunities for profile-raising and communicating collections’ rich potential to colleagues or wider
networks. A sample workshop plan is given in Appendix 2.
The Significance Assessment Process supports the assessment team to:
• Analyse what the organisation already knows (and doesn’t know) about the material under review
• Pool thinking on what that knowledge reveals about the material’s meaning and value for the
organisation and its target audience groups
• Distil the team’s responses and revealed information into a succinct, convincing Statement of
Significance for both immediate and future use
• Note decisions and outcomes of the Assessment – and immediate action points
• Identify areas and contacts for further research
4.1 GETTING STARTED

As with the Collections Review Process, if an organisation is planning to use the Significance Assessment
Process for the first time - or in new areas of its collection - it would be helpful for the planning stages of
the project to be overseen by a steering group representing a range of interests and expertise.
Where the assessments will cover a variety of collection types the group should agree on shared objectives
and decide how to achieve consistency. If there is an aspiration to embed the Significance Assessment
Process as part of the organisation’s on-going approach to its work with collections, the group will need to
plan a communications strategy – deciding how colleagues not involved in the initial project are to be made
aware of the assessment methodology, the sessions and their outcomes.
Consideration should also be given to how the assessment outcomes will be made accessible to future staff
and users. The Collections Trust’s SPECTRUM 5 collection management standard recommends making
assessment reports retrievable through an organisation’s collection catalogue (See Section 5 FURTHER
SOURCES AND RESOURCES).
As with the Collections Review Process it would be useful for the group to run one or more pilot sessions to
test the Significance Assessment Process and decide whether any customisation is needed to ensure that
the Process tools, especially the Significance Grid, reflect the organisation’s needs and objectives. For
example, it will usually be necessary for an organisation to check that the target user groups cited on the
standard Significance Grid accurately reflect its own priority audiences.
Running a successful Significance Assessment session demands well planned input of staff time, skills and
expertise. It can be helpful to allocate staff roles early on in the planning stage for each session. At RCSE the
project team found they needed people to take responsibility for ‘owning’ each session, for administration,
pre-session research, locating and managing the material to be included, chairing or facilitating the session,
recording the outcomes and drafting the statement of significance. A full guidance note on these roles and
on planning for and running a session is given at Appendix 1.
Selecting material for assessment

While the Collections Review Process works through a whole collection by using location-based ‘Review
Units’, the Significance Assessment Process is much more targeted. Selection of the material to be included
will depend entirely on the agreed purpose and scope of the assessment session.
Assessments might cover e.g.
• A single item – perhaps when considering whether to acquire it by donation or purchase, recommend it
for disposal, make it the subject of a fundraising bid for conservation etc.
• A group of material connected by its provenance. This might be a single accession group or deposited
collection - e.g. when seeking a grant for documentation or conservation
• A group of material of a particular physical type
• A wider, themed cluster of material that relates to a particular topic and/or period
Assessors need to be able to inspect and, where appropriate, handle the material and to have access to all
available catalogue records and any other supporting documentation. They have to consider what can be
learnt from direct examination of the objects, books and/or records themselves and what from catalogue
descriptions, contextual paperwork, related archives, objects or published material elsewhere in the
collections.
That said, it is important not to overwhelm the assessment session with too much material. The team will need
time to focus on and discuss each of the individual items presented.
4.2 THE SIGNIFICANCE ASSESSMENT TOOLS

Unlike the Collections Review Process, the Significance Assessment Process does not allocate scores against
set criteria. Instead, it uses two ‘thinking tools’ to help assessors probe and then express the meaning and
value of the objects, books or archival records being assessed.
The two Significance Assessment tools are:
• Significance Assessment Grid (colour coded green)
• Statement of Significance Template (colour coded green)
4.2.1 The Significance Assessment Grid

The Significance Grid is a matrix consisting of a structured series of prompt questions grouped under six
column headings (A-F) and considered from five different user perspectives (Rows 1-5).
While some of the questions could be given a ‘yes/no’ answer, this is not a tick-box exercise. The questions
are designed to prompt the assessment team to explore the full implications of each answer and consider:
• what the organisation knows about the material
• what that means in terms of its significance under various criteria and to various audiences
• what further research is needed to arrive at a better understanding of its meaning and potential value
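For teams capturing the Grid digitally, its structure is simply a matrix of prompts keyed by column letter and row number. The sketch below is a hypothetical illustration (prompts abbreviated; cell references such as ‘A1’ follow the Grid’s own notation):

```python
# Column and row labels taken from the Grid described in the text.
COLUMNS = {"A": "PROVENANCE/ACQUISITION", "B": "RARITY/UNIQUENESS",
           "C": "SENSORY / VISUAL QUALITY / EMOTIONAL IMPACT",
           "D": "CONDITION / COMPLETENESS", "E": "HISTORICAL/CULTURAL MEANING",
           "F": "EXPLOITABILITY"}
ROWS = {1: "GENERAL / KEY POINTS", 2: "NATIONAL/INTERNATIONAL",
        3: "LOCAL/REGIONAL", 4: "COMMUNITY/GROUP",
        5: "ORGANISATIONALLY OR SITE SPECIFIC"}

# Prompt questions keyed by cell reference; two abbreviated examples from column A.
PROMPTS = {
    "A1": ["Do we know how/when/why/from whom it was acquired?",
           "Who created, collected, made, wrote, published, owned or used it?"],
    "A2": ["Does its provenance connect it to anything of national/"
           "international significance?"],
}

def cell(ref):
    """Expand a grid reference like 'A2' into its column, row and prompts."""
    col, row = ref[0], int(ref[1:])
    return COLUMNS[col], ROWS[row], PROMPTS.get(ref, [])

print(cell("A1")[0])
```

Holding the prompts this way makes it straightforward to customise the Grid, as described below, by editing the column, row or prompt entries rather than the process itself.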
The Column Headings
A PROVENANCE/ACQUISITION
The questions here prompt consideration of how, why and by whom material was originally
collected or created; its chain of ownership before coming into the organisation; how and when it
was acquired by the organisation; whether the original owner/collector/creator or the chain of
ownership is significant – either to the organisation’s own history or because it is relevant to the
interests of its target audiences.
B RARITY/UNIQUENESS
This explores the uniqueness or rarity of the material; whether it represents the only, an unusual or
a good/typical example of its type in the organisation’s and/or comparable collections.
C SENSORY / VISUAL QUALITY / EMOTIONAL IMPACT
This assesses whether the material demonstrates technical and/or creative accomplishment;
whether it has potential to have a strong visual, sensory or emotional impact; whether its design,
style of presentation or use of language were innovative or influential.
D CONDITION / COMPLETENESS
This assesses the material’s current condition - including any interventive conservation work done -
and explores how that impacts on its meaning and usability. The questions prompt consideration of
how any incompleteness, damage, restoration, annotations etc. might reveal (or obscure) useful
information about the material’s history of use.
E HISTORICAL/CULTURAL MEANING
By exploring what associations the material has with any particular period, event, activity,
institution or person, the questions prompt consideration of its historical or cultural significance for
the organisation and its target audiences.
F EXPLOITABILITY
This prompts consideration of the extent to which the material could be used as a resource for
marketing, profile raising or income generation.
The Rows

Row 1 on the Grid prompts the assessors to consider headline ‘key points’ of relevance to the wide
‘general’ audience. Rows 2-5 focus on the needs and interests of four defined user groups. The numbering
of the rows simply provides a grid reference – it does not represent a score. The sequence of
the rows is not intended to be read as hierarchical: ‘national/international’ significance is not classed as
having any greater weight than ‘local/regional’, ‘community’ or ‘organisational’. The weight attached to
each of the categories will differ between organisations and projects.
1 GENERAL / KEY POINTS
Prompts consideration of headline key points that impact on the meaning, value and usage
potential of the material for the organisation and all its audiences.
2 NATIONAL/INTERNATIONAL
Prompts consideration of whether the material is of outstanding international interest, quality or
research potential: perhaps including supreme examples of the type in the UK, or associated with
nationally or internationally known events, themes, movements or people.
3 LOCAL /REGIONAL
Prompts consideration of material’s particular value, relevance, interest and accessibility for a
local/regional audience, or for people with an interest in the locality or region and its history. The
questions encourage assessors to consider whether material might be significant to regional
identity and sense of place – and perhaps to regionally determined social, economic and cultural
objectives and community cohesion.
When planning for the assessment, it will be important for the team to consider and agree
on what constitutes the museum’s locality or region. This might simply be defined by local
government boundaries or be a recognised geographical area. It might be a more loosely
defined catchment area for potential repeat visitors who regard the organisation as
representing their area or meeting their specific cultural needs.
4 COMMUNITY/GROUP
Prompts consideration of whether material might have especial significance for a particular group
or section of the community. The term ‘community/group’ is used here to include a diversity of
groups and group types. Communities might have strong, current local representation or be
dispersed more widely across the region and/or beyond.
When planning the assessment the team should consider and agree whether they wish to
prioritise the representation or needs of any particular target group e.g.:
• Subject experts with specialist knowledge or expertise – this might extend to partner
organisations or professional groups with a relevant specialist subject interest
• People defined, or who self-define, because of their ethnicity, faith, gender identity,
sexuality, mental or physical health, levels of wealth and poverty, social class etc.
• People defined, or who self-define, because of their working or life experience - e.g. a
particular migrant workforce, former workers at a particular site or trade, survivors of a
traumatic event etc.
5 ORGANISATIONALLY OR SITE SPECIFIC
Prompts consideration of the material’s particular relevance and meaning within the context of the
organisation’s own history, its wider collections and collecting policy, its buildings or its immediate
environs and also of the material’s potential interest and use value for colleagues. Many
organisations will also need to consider the interests of a parent body - e.g. local authority,
university, charitable trust, commercial organisation.
4.2.2 Customising the Significance Grid

The Column headings on the Grid are designed to be universally applicable, but some organisations may
need to modify them to reflect very specific collection types, or users with very specific interests that need
to be accommodated.
e.g. at the Royal College of Surgeons (RCSE) the assessment team modified the column headings on the
Grid. They annotated ‘E HISTORICAL / CULTURAL MEANING’ with a note to specify ‘including medical
history’ and inserted an additional column for ‘Current Scientific Relevance’.
Teams will also want to look closely at the audience groups specified in each row of the Grid. Many
organisations will have existing mission statements or business plans that specify their priority target
audiences. Usually, to ensure its relevance to the organisation as a whole, the Significance Assessment Grid
and Process should reflect any pre-existing organisation-wide audience prioritisation. Alternatively the
stated objectives of the assessment may demand that the Grid be modified to reflect the interests of a
specific subset of users. In either case, the prompt questions in each row will need to be modified
accordingly.
At RCSE the Significance Assessment team changed the defined user groups on the grid to:
1 General
2 Medical/ Dental /Veterinary (including medical education)
3 Non-medical (including formal + informal learning +commercial users)
4 National / International
5 London
6 RCSE organisation or site specific
At London’s Southbank Centre – where the tools were used more informally – the team chose to
highlight the following interest and user groups:
1 General
2 Southbank Centre’s own management and staff (to support their work and as ‘corporate
memory’)
3 National and international: (including performers, audiences, visitors, supporters, funders)
4 Lambeth; South London; London: (including residents, performers, supporters, participants,
audiences, visitors, funders)
5 Specific communities of interest: (including students/academics/professionals, architecture,
design, social and political history, music, dance, performance, art, arts management, exhibition
history and design, schoolchildren and young people, artists and performers)
4.2.3 The Statement of Significance Template

The Statement of Significance Template is designed to capture the assessment team’s responses to the
prompt questions on the Significance Assessment Grid and build a clear, easily communicated Statement
that summarises all the thinking done and evidence considered.
As well as helping to generate the Statement, the Template provides a valuable record of the detail and
outcomes of the session and should be retained for future reference.
There is a section at the top of the Template for entering details about the material being assessed, the
assessment team, the purpose of the assessment and the date and place. Below that, the Template
provides six tables, one each for the six columns on the Significance Grid:
A PROVENANCE/ACQUISITION
B RARITY/UNIQUENESS
C SENSORY / VISUAL QUALITY / EMOTIONAL IMPACT
D CONDITION / COMPLETENESS
E HISTORICAL/CULTURAL MEANING
F EXPLOITABILITY
In each table there are five rows, one for each of the rows on the Grid:
1 GENERAL / KEY POINTS
2 NATIONAL/INTERNATIONAL
3 LOCAL /REGIONAL
4 COMMUNITY/GROUP
5 ORGANISATIONALLY OR SITE SPECIFIC
This format allows for quick bullet pointing of the comments and observations made by the assessment
team as they work down each of the columns A-F.
At the bottom of each table on the Template, there is space for drawing together a concise Assessment
Summary capturing the essence of the bullet points above. These summaries form the basis of the final
Statement of Significance.
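Where a scribe is recording the Template electronically, the same structure can be mirrored in a simple record whose first draft of the Statement is just the six Assessment Summaries taken in column order. The sketch below is hypothetical; the field names and sample text are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SignificanceRecord:
    """Illustrative container mirroring the Statement of Significance Template."""
    material: str
    bullets: dict = field(default_factory=dict)    # e.g. {"A1": ["bullet", ...]}
    summaries: dict = field(default_factory=dict)  # one Assessment Summary per column A-F

    def draft_statement(self):
        # A first draft is simply the Assessment Summaries in column order,
        # ready for editing into a final stand-alone Statement.
        return "\n\n".join(self.summaries[c]
                           for c in "ABCDEF" if c in self.summaries)

# Invented sample content, for illustration only.
rec = SignificanceRecord(material="Sample accession group")
rec.summaries["A"] = "Well-provenanced; acquired from the founder's estate."
rec.summaries["E"] = "Strong association with the organisation's early history."
print(rec.draft_statement())
```

Keeping the bullets, summaries and draft together in one record also preserves the session detail for future reference, as the Template itself is intended to do.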
After the tables there is a box for drafting the STATEMENT OF SIGNIFICANCE. The Statement should
express the team’s understanding of the assessed material’s current and potential meaning and value to
the organisation and its identified audiences.
Then three further boxes:
FURTHER RESEARCH AND CONSULTATION
OUTCOMES OF THE ASSESSMENT
ADDITIONAL NOTES
COLUMN ‘A’ FROM THE SIGNIFICANCE GRID

A PROVENANCE / ACQUISITION

1 GENERAL / KEY POINTS
• Do we know how/when/why/from whom it was acquired by our institution?
• Who created, collected, made, wrote, published, owned or used it? Is there evidence?
• When/where/why/for whom was it produced / collected? Is there evidence?
• Is there a known chain of ownership and use?
• Is it unusually well-provenanced / documented for its class or type?

2 NATIONAL / INTERNATIONAL
• Does its provenance connect it to any event, person, place or theme of national / international significance?
• Is its creator / collector of national / international significance?

3 LOCAL / REGIONAL
• Does the provenance connect it to any event, person, place or theme of local / regional significance?
• Is the creator / collector of local/regional significance?

4 COMMUNITY OR GROUP
• Does the provenance connect it to any event, person, place or theme of relevance to a specific community or group?
• Is the artist, writer, designer, creator important to a specific community or group?

5 ORGANISATION / SITE SPECIFIC
• Does the provenance connect it to any event, person, place or theme relevant to our institution’s history, site, building or location?
• Is the creator / collector relevant to our institution’s history, site, building or location?
• Was the acquisition of the object/collection part of a defined collecting plan?
• Was the object/collection acquired with external support or funding?
TABLE ‘A’ FROM THE STATEMENT OF SIGNIFICANCE TEMPLATE

Table for capturing responses from rows 1-5 in column ‘A PROVENANCE/ACQUISITION’ on the
Significance Grid:

A PROVENANCE / ACQUISITION
1 GENERAL / KEY POINTS            A1  •
2 NATIONAL / INTERNATIONAL        A2  •
3 LOCAL / REGIONAL                A3  •
4 COMMUNITY OR GROUP              A4  •
5 ORGANISATION / SITE SPECIFIC    A5  •
ASSESSMENT SUMMARY
4.3 USING THE GRID AND TEMPLATE TO GENERATE A STATEMENT OF
SIGNIFICANCE
4.3.1 Capturing assessors’ responses on the Template

A suggested outline of the forward planning and allocation of tasks that will help make a Significance
Assessment Session successful is given in Appendix 1. It is necessary for one member of the team to act as
‘scribe’ – taking notes during the session and writing up the full Statement of Significance Template after it.
Audio recording the session makes this task much easier. It enables the ‘scribe’ to play more than just a
note-taking role in the session itself and to pick up nuanced detail of the discussion by referring back to the
audio record when doing the write-up.
There is space in each table of the Template to capture the assessment team’s responses to the prompt
questions on the Grid in as many bullet points as necessary. Even though many of the prompt questions on
the Grid could have ‘yes/no’ answers, this is not a ‘tick box’ exercise. At every point on the Grid it is
important to assess and capture the implications of the answers and of assessors’ comments and
observations. Responses and information that argue against material’s significance need to be recorded
as well as positives, and it is important to capture dissenting views as well as points on which there is
consensus.
Each table on the Statement of Significance Template ends with an Assessment Summary box. The aim is
to draft a paragraph or two that reviews and summarises all the bullet-pointed comments. The fine honing
of this will usually be done after the session, but making a few notes in the Assessment Summary box as the
team works its way through the Process will help the scribe to weight the importance of the bullet points
appropriately and capture the essence of the team’s thinking while it is still fresh. This is vital if the session
is not being audio recorded.
The Statement of Significance
When all the assessment team’s input has been captured on the Template and the assessment summaries
for each column on the Significance Grid have been written, the scribe will be ready to create a first draft of
the Statement of Significance.
The Assessment Summaries for each column should be dropped into the Statement of Significance box of
the Template. This will provide a basic outline for the Statement. It should then be a simple job to edit the
text into a final version, taking out any duplication and making sure that key points are emphasised and
expanded as necessary. It may be helpful at this stage for the scribe to do basic follow-up research to
clarify points raised and check references, name spelling etc. The statement should then be circulated for
comment/sign off by all the participants in the relevant Significance Assessment session.
The Statement of Significance should be written to stand alone, ready for presentation to colleagues,
partners or funders and to inform future generations of staff. Organisations should consider allocating a
field in their collection documentation system to hold or provide links to Statements of Significance as they
are developed. This would reflect the recommendation made by Collections Trust’s SPECTRUM 5 (See
Section 5 FURTHER SOURCES AND RESOURCES). It will also be useful to retain completed Statement of
Significance Templates on file as a record of the assessment process.
In some instances it will be appropriate to make the Statements of Significance very public – e.g. when
fundraising for conservation.
Further research and consultation

Working through the Significance Assessment Process will alert the assessment team to opportunities for
further research and consultation. The FURTHER RESEARCH AND CONSULTATION box at the end of the
Statement of Significance Template is for capturing ideas about potential contacts or lines of investigation.
As more research is done the original Statement of Significance might need to be re-visited and updated to
reflect new information and understanding.
Outcomes of the Assessment

This box provides the opportunity to
• Record - and where possible allocate responsibility for - immediate action points agreed during the session
• Record ideas and objectives suggested for future follow up
Additional notes

This box is for recording any further general notes and findings not covered above.
4.4 SCORING FOR SIGNIFICANCE

The Reviewing Significance Significance Assessment Process does not seek to score or rank individual items’
or collections’ significance. The premise is that the material will have different levels of value, meaning and
usefulness for different audiences and purposes. Although numbered, the sequence of the rows on the
Significance Grid is not intended to be read as hierarchical. ‘National/international’ significance is certainly
not seen as having any greater importance than ‘local’, ‘community’ or ‘organisational’. The weight that
assessment teams choose to attach to each of the categories will depend on the stated objectives of each
particular assessment and will differ between organisations and between projects.
However, a team using the Significance Assessment Process to help sift and select material for use in a
short-term project - e.g. a digitisation, outreach or display programme - might wish to develop their own
scoring system based on the project’s strategic priorities. Any scoring or ranking system developed for
more general, long-term use, or to inform irreversible decisions – e.g. around disposal - would need to be
not only transparent, but ‘future-proofed’ – taking account of any potential future changes in
organisational or user interests and priorities.
The Netherlands Cultural Heritage Agency has developed a ‘collection valuation’ system which is partly
informed by Reviewing Significance, but which does use a scoring system. Dutch and English language
versions are available for download (see Section 5 FURTHER SOURCES AND RESOURCES).
Outcomes from using the Significance Assessment Process include:
• A better understanding of the meaning and current or potential value of museum, archive or library
collections to their holding organisation and its users
• Clear, evidence based narratives for communicating the significance and public value of individual
items or groups of material to governing bodies, managers, colleagues and potential funders
Embedding the Significance Assessment approach supports staff in their routine work by:
• Providing a systematic, group based approach to selecting material and developing key interpretation
themes for new displays, temporary exhibitions, education or outreach programmes, web based
resources etc.
• Supporting rational, evidence-based decision making when considering material for acquisition,
transfer or disposal
• As a succession-planning/knowledge-transfer tool - capturing staff and external expertise and
understanding of the collections for immediate and long-term use
• Providing an evidence base to inform and justify resource investment - e.g. in conservation, cataloguing
or digitisation - especially when undertaken in conjunction with the Collections Review Process
• Giving a better understanding of how collection elements can complement and illuminate each other –
especially when working across museum, archive and library collections within a single organisation
• Identifying knowledge gaps where further research and/or external input is required
• Running a Significance Assessment workshop to introduce the collections to existing colleagues / new
staff / volunteers / external supporters or partners and fire up their enthusiasm
• Developing the case for recognition of a collection’s significance - e.g. under the Arts Council England’s
Designation Scheme (see Section 5 FURTHER SOURCES AND RESOURCES).
• Developing the case for UK Heritage Lottery Fund grants - e.g. HLF Heritage Grants require applicants to
‘consider in detail why your heritage is important, and to whom’ – and to present your case as part of
the application process (see Section 5 FURTHER SOURCES AND RESOURCES).
5 FURTHER SOURCES AND RESOURCES
All the hyperlinks given here are correct at the time of writing – January 2018
Reviewing Significance 2.0
Version 2.0 of Reviewing Significance was published by Caroline Reed on the Collections Trust website in
2012. This was an updated version of the original Reviewing Significance: a framework for assessing
museum collections’ significance, management and use created for Renaissance East Midlands in 2010.
See: http://collectionstrust.org.uk/resource/reviewing-significance-2-0/
The Screen Heritage UK Collection Assessment Toolkit
Funded by Film London and London’s Screen Archives and developed by Caroline Reed in partnership with
film archivist David Cleveland. An introduction to the toolkit is included in Reed and Cleveland’s The Screen
Heritage UK Moving Image Collections Handbook: a guide for non-specialist archivists working with film and
video collections currently downloadable from the UK Collections Trust website.
See: http://collectionstrust.org.uk/resource/screen-heritage-uk-moving-image-collection-assessment-toolkit/
Further information can be obtained from London’s Screen Archives, care of Film London
See: www.filmlondon.org.uk
University College London’s Collections Review Toolkit
The Reviewing Significance Collections Review tools were inspired by UCL’s Collections Review Rubric, a
survey methodology developed and applied by UCL’s Museums and Collections department in 2007-2009.
See: http://collectionstrust.org.uk/resource/ucl-collections-review-toolkit/
Assessing Museum Collections: collection valuation in six steps (Op de museale weegschaal)
Cultural Heritage Agency, Amersfoort, 2014
An English language version is available of the Dutch Cultural Heritage Agency’s six step valuation
methodology published in 2013. The approach references Reviewing Significance 2.0., but includes a
method for assigning and justifying ‘value scores’.
See: https://cultureelerfgoed.nl/publicaties/op-de-museale-weegschaal-collectiewaardering-in-zes-stappen
Collections Council of Australia’s Significance
The Reviewing Significance Significance Assessment criteria were initially inspired by the Collections Council
of Australia’s Significance: a guide to assessing the significance of cultural heritage objects and collections
See: https://www.arts.gov.au/sites/g/files/net1761/f/significance-2.0.pdf
------------------------------------------------------------
Archive Service Accreditation
Managed by The National Archives, the standard defines good practice and identifies agreed standards,
thereby encouraging and supporting development. The standard is aligned with other relevant quality
assurance schemes, improvement tools and data gathering processes.
It is aimed at institutions that hold archive collections, whatever their constitution, and covers both private
and public sector archives. It enables archive services to review and develop their policies, plans and
procedures against a UK wide standard which has been developed by the archives sector, identifying
strengths of the archive service and providing a framework to improve areas of weakness.
See: http://www.nationalarchives.gov.uk/archives-sector/archive-service-accreditation.htm
Arts Council England Museum Accreditation
The Museum Accreditation Scheme sets out nationally-agreed standards, which inspire the confidence of
the public and funding and governing bodies. It enables museums to assess their current performance, as
well as supporting them to plan and develop their services.
See: http://www.artscouncil.org.uk/supporting-museums/accreditation-scheme-0
Arts Council England’s Designation
The Designation Scheme identifies and celebrates outstanding collections held in museums, libraries and
archives across England. The founding aims were to raise the profile of these vital collections and
encourage everyone to safeguard them.
See: http://www.artscouncil.org.uk/supporting-collections-and-archives/designation-scheme
Benchmarks in Collections Care 2.0
Benchmarks in Collection Care is a self-assessment checklist, which sets out clear and realistic benchmarks
for the care of collections. Benchmarks is a management tool which should be used as part of an
organisation’s planning cycle to assess and plan collections care activity and measure progress against
those plans.
See: http://collectionstrust.org.uk/resource/benchmarks-in-collections-care-2-0/
Heritage Lottery Fund
HLF’s Heritage Grants are designed to support organisations seeking to rescue historic buildings, breathe
new life into a collection or record people’s stories.
See: www.hlf.org.uk
Revisiting Collections
Revisiting Collections offers a structured framework to help museums and archives to open up their
collections for scrutiny and to build and share a new understanding of objects’ and records’ multi-layered
meaning for diverse audiences. It provides tools to support capture of external users’ responses to
collections in documentation systems.
See: http://collectionstrust.org.uk/resource/revisiting-museum-collections/
SPECTRUM
Collection Trust’s SPECTRUM is the UK’s collection management standard and is also used around the
world. SPECTRUM 5, published in September 2017, includes a new procedure for approaching collections
review.
See: http://collectionstrust.org.uk/spectrum/
APPENDICES
1 Planning and managing a successful Significance Assessment Session
2 Significance Assessment workshop – case study
Appendix 1 PLANNING AND MANAGING A SUCCESSFUL SIGNIFICANCE ASSESSMENT SESSION
Running a successful Significance Assessment session demands well planned input of staff time, skills and
expertise. It is vital that all Assessments have a readily explained focus and purpose and are effectively and
efficiently run – making the absolute most of all participants’ time and knowledge and delivering against
clearly defined objectives.
Inviting external expert advisers to give their time to the process adds an additional layer of accountability.
External participants will almost certainly enjoy the session, but they also need to have confidence that
their time is being well spent and that something useful and usable has been achieved. They should be
offered travel expenses and refreshments. It may be necessary to consider offering fees e.g. to freelance
subject experts. Participants will need feedback after the session and, if they enjoyed the experience, might
very well wish to continue offering support.
Roles
It will be helpful to allocate staff roles early in the planning process. At the Royal College of Surgeons the
project team found they needed people to take responsibility for: 'owning' the session; administration;
pre-session research; locating and managing the material to be included; chairing/facilitating the session;
and acting as 'scribe'. Depending on the size and scope of the assessment, it will often be possible for one
person to take on more than one of these roles – or for one of the roles to be divided between two or more
staff members.
Owner
This is the key organisational/project management role for the Assessment and it demands both subject
expertise and direct knowledge of the material to be assessed. The Owner will lead on:
• Identifying the purpose and setting clear objectives for the Significance Assessment session;
• Deciding on the required length and format of the session (see below) and the space requirements
necessary to ensure access for inspection and safe handling of material (+ a separate refreshment area
if needed)
• Allocating responsibility and monitoring colleagues’ delivery of the supporting roles outlined below
• Working with Collection Managers and others to identify potential material and make the final
selection for inclusion in the assessment
• Early identification of both internal and external assessors. The Owner will need to consider the range
of expertise and subject knowledge necessary to meet the session objectives.
o Internally, as well as Collection Managers this might include staff with conservation, learning
and outreach expertise and staff from other departments of the museum/archive/library’s
parent body.
o Externally it might include academic, professional and other subject experts and/or
representatives who understand the needs and interests of potential user groups.
o It could be useful to invite trustees or other supporters with a particular interest in the material
or subject matter of the session.
• Timely invitation of all participant assessors. Invitations to include information about time, place, how
the Significance Assessment Process works, the specific purpose of the session, the likely material to be
included and, where appropriate, whether fees will be paid for the participants’ time or expenses
covered
• Agreeing with colleagues on the approach to be taken to ensuring that outcomes of the session are
recorded and made accessible for immediate and long-term use
Administrator
Under the guidance of the Owner the Administrator will:
• Book a suitable room and any refreshments
• Set up recording equipment and manage the audio recording. It may be useful to use at least two
digital recorders – both to ensure that all areas of the room are covered and to provide back-up
• Maintain communication with all participants invited to the session including: inviting internal staff;
circulating the session programme; emailing links to any available online catalogue and/or other
information at least one week prior to the session; circulating the scribe’s first draft Statement of
Significance Template after the session and ensuring that all responses are forwarded to the scribe by
an agreed deadline; circulating the final Template and Statement of Significance to all participants
Pre-session Researcher(s)
• Carrying out research on the topic and relevant material for consideration during the session under the
guidance of and in collaboration with the Owner and collection managers. Even when the material to
be considered is a single item or a group from just one section of the organisation’s collections, the pre-
session research phase should include discussion with staff managing other parts of the collections to
ensure that information about relevant contextual material is made available to the assessors - even if
this contextual material is not to be included in the actual session. This might include material on
display.
• Working with the Owner to ensure that the material selected for the session is relevant to the purpose
of the assessment session. There should not be too much – it is important that the assessors have
enough time to look closely at all the material and its supporting documentation.
• The Pre-session Researcher needs to work with collection managers to ensure that all relevant
supporting information is available prior to and at the session, including catalogue or other descriptive
information and any documentation held about provenance and acquisition. Where possible, the
Researcher should provide the Administrator with links to online catalogue and any other supporting
information for emailing to all participants at least one week prior to the session.
• Pre-session inspection of the material to identify any intrinsic information that might be of especial
interest during the session – e.g. labelling, damage, annotations to published works etc.
Content Support
This is a practical collections management role that involves:
• Physically locating the material identified for possible inclusion in the session
• Checking that all material to be used is in a stable condition for handling or viewing
• Retrieving the material before the session and putting it away again afterwards
• Working with collection managers to ensure that appropriate handling guidelines are clearly given to
internal and external participants at the start of the session
• Laying out the material to ensure that it is safely presented and readily visible during the session
• Providing gloves, book supports, pencils etc. for assessors to use if required
• Providing handling guidance, especially for external advisers with little or no experience of handling
objects, archives or rare books
• Invigilating handling during the session. If the collection manager responsible for the material has
undertaken the Content Support role, but wishes to take an active role in the session itself, it would be
sensible for them to delegate this invigilation role to another staff member.
Chair / Facilitator
This might well be the Owner of the session. The Chair needs to ensure that they are very familiar with the
Significance Assessment Process and have the confidence to manage input from both colleagues and
external advisers. Their key roles on the day are:
• Facilitating delivery of the whole session
• Keeping the Process focussed and running to time
• Ensuring that participants fully understand and (as far as possible) follow the methodology
• Chairing and steering the formal discussion part of the session. Often this will need to be handled with
some discretion. The aim is to keep discussion structured and focussed around the prompt questions in
each ‘box’ on the Significance Grid, whilst not stifling off-track, but still valuable, contributions.
• Without being overly rigid, the Chair needs to ensure that all the key prompt questions are addressed;
that issues raised during the first part of the session, when assessors are looking at and handling
material, are brought back to the main forum discussion; and that each participant is given adequate
opportunity to contribute.
• The Chair/Facilitator will need to bear the needs of the Scribe in mind, ensuring that there are no side
conversations during the main forum discussions and that each speaker can be clearly heard (and
audio recorded).
• It may be necessary for the Chair to ensure that the Assessment does not turn into a Q&A session
dominated by the external advisers.
Scribe
This is a very responsible role. The Scribe needs to be familiar with the Significance Assessment Process and
to understand how the organisation might want to make use of the Statement of Significance, both
immediately and in the future.
During the session, the role involves:
• Note taking to supplement/annotate the audio recording of the discussion
• Asking for clarification or additional information where necessary during the discussion
Recording the session makes it much easier for the Scribe to go over and capture the subtleties of the
discussion. It also frees them to take a more active part in the session.
After the session the role requires:
• Analysing and summarising the audio record/written notes of the discussion using the structured
format provided by the Statement of Significance Template
• Undertaking basic follow up research to clarify factual information referred to during the discussion e.g.
proper names, dates
• Noting agreed decisions, action points, recommendations and learning outcomes from the session
• Noting further research or consultation requirements
• Preparing a first draft of the Template and Statement of Significance for the Administrator to circulate
for assessors' comments
• Collating all comments and producing the final Statement of Significance
Suggested format for the session
The overall length, format and location of each session will depend on the scope, complexity and
physical nature of the material being assessed – and on the number of people involved in the assessment.
Space
Most sessions involving a group of assessors will be held in a dedicated, contained space such as a meeting
or education room with plenty of space for laying out the material to be assessed.
It is important to ensure that there is:
• Safe access for moving fragile or vulnerable material into the room
• Adequate table or other flat surface space for laying out all the material
• Adequate circulation space to allow assessors to view and, where safe, handle the material
• Comfortable and adequate space for a round table conversation involving all participants
• A separate refreshment area if required
• Ideally, hand-washing facilities in or very near the room – or alcohol gel for cleaning hands, plus paper
towels and a bin
Of course, there will be occasions when the assessors have to look at material in situ in its store or on
display. At the Royal College of Surgeons one session – with just three assessors – was held in a bay of the Library book stack.
Another included museum objects on display as part of the group of material to be assessed.
Timing
The Owner and Chair/Facilitator should always have a clear timetable in mind. The length of time needed
for the assessment will depend on the nature and complexity of the material - and the number of assessors
participating.
The following outline session plan might be useful:
Chair: Welcome and introductions.

Owner: Introduction to the topic, the purpose of the session and desired short-term/long-term outcomes.

Owner and/or Pre-session Researcher + collection managers: A short verbal introduction to the material to
be assessed during the session and to supporting documentation and other information. Comment on any
other related material in the collections and on any handling issues.

Owner and/or Pre-session Researcher + collection managers + Content Support + all: All participants to
spend a minimum of 30 minutes viewing and, where possible, handling material. The Pre-session
Researcher(s) should be on hand, pointing out key features of the material and answering queries where
possible. Content Support to invigilate and ensure material is handled safely. As part of the introduction to
this part of the session, the Chair needs to remind participants to try and 'hold' thoughts and information
triggered by their exploration of the material and to bring these back to the table to be shared during the
discussion forum.

Chair + all: Round table forum discussion managed by the Chair and structured around the prompt
questions on the Significance Grid. It is recommended that this should be paced to run for no more than
1-1½ hours.

Chair + Owner: Summary, including next steps, and thanks.
Planning checklists

Before the session:

1-2 months before the session
• Purpose and objectives of the session clarified (Owner)
• Staff participants identified and invited, and roles assigned (Owner/Administrator)
• Length/format of session decided (Owner)
• Room booked (Administrator)
• External advisers invited/recruited (Owner)

1 month – 2 weeks before the session
• Final selection of material for assessment made; material located and inspected (Owner/Pre-session
Researcher/collection manager(s))
• Contextual information/documentation researched and noted (Pre-session Researcher)

1-2 weeks before the session
• Session programme, location information and outline catalogue information circulated digitally to all
participants (Pre-session Researcher/Administrator)
• Any refreshments booked (Administrator)
• All material prepared for inspection/handling where possible (Pre-session Researcher/Content Support)
• Any handling equipment needed prepared (Content Support)
• Full catalogue information and any other available contextual/source material provided digitally to the
Scribe (Pre-session Researcher/Administrator)
• Hard copies of full catalogue information (and/or digital copies on laptop) prepared and ready for
inspection during the session (Pre-session Researcher/Administrator)
• Audio equipment checked and spare batteries supplied (Administrator)
After the session:

1 week after the session
• All participants thanked for their contribution and expense claims processed (Owner/Administrator)
• All material returned to its usual location (Content Support)

1-3 weeks after
• First draft of the Template and Statement of Significance written and circulated to all participants with
a stated deadline for comment (Scribe/Administrator)

3-5 weeks after
• All participants forward comments/responses on the draft to the Scribe (All)

4-6 weeks after
• Final Statement of Significance prepared and circulated to all participants (Scribe/Administrator)

3-8 weeks after
• Internal team considers outcomes and implements any short-term actions agreed (Internal team)
Appendix 2 SIGNIFICANCE ASSESSMENT WORKSHOP – CASE STUDY
At a major central London arts and performance venue, the team commissioning a review of the centre’s
rich, but at that time underused, archive collection wanted to raise the archive’s profile with colleagues
from across the organisation and to let them know about the archive collections review project and its
progress.
About halfway through the project, the external consultants undertaking the review ran an informal
workshop drawing on the Significance Assessment Process approach and looking at just three ‘clusters’ of
material.
Participants relished the physical variety and intrinsic interest of the material they saw and immediately
recognised its potential to illuminate an unexpectedly varied range of topics relevant to 20th and 21st
century political, social and cultural history – as well as broadening the organisation’s understanding of its
own past.
The workshop followed a simple pattern readily adaptable to other organisations. Attendees included a
cross-section of staff: senior managers heading up engagement, learning, participation, capital
development and fundraising; specialist visual and performing arts learning and outreach officers; the
heritage participation associate who had commissioned the review; the centre's archivist; a librarian;
two archive volunteers; and external members of the centre's heritage advisory group.
Facilitated by the review consultants, the workshop ran for roughly two hours along the lines of the
schedule outlined below – with participants very fully engaged with the material and markedly reluctant to
leave at the end.
The workshop format:

Session 1

Introductions (10 mins)
Participants were invited to explain their role in the centre, say briefly what (if anything) they currently
knew about the archive, how they might hope to use its contents in the future and how they felt it could
be used to support the centre.

Communicating the significance of the archive collections (10 mins)
The consultants gave a brief outline of their findings to date, highlighting some 'star' items and explaining
what they felt might be the strengths of the collection.

Introducing 'Reviewing Significance' (5 mins)
The consultants explained how the Significance Assessment Process is used to help organisations develop
a multi-layered understanding of their collections.

Significance Assessment 'taster session' (35 mins)
The whole group tried out together a much simplified 'taster' version of the Significance Assessment Grid.
They assessed a 'cluster' of material relating to the defection to the West of one of the Soviet Union's star
ballerinas in the early 1970s, on the eve of her final appearance in her company's summer season at the
centre.
The key objects in the group of material were the dancer's discarded ballet shoes, marked with her name
and found in her dressing room bin on the day after her defection by a casual backstage visitor – who
herself happened to have worked on the building of the centre as a young architectural student.
The shoes were supported by correspondence, programmes, ticket stubs, photographs and sensational
news cuttings covering the defection – all from the archive collection.
As a very short introduction to the archive and the Significance Assessment approach, that taster session
could have been enough – just followed by a short feedback discussion. In fact, in order to give participants
a better understanding of the breadth of the collections, the afternoon included a second session:
Session 2

Significance Assessment workshop (35 mins)
Participants split into two smaller discussion groups to pool their perspectives on two further 'clusters'
from the archive collections – one based around a glamorous charity concert featuring Frank Sinatra in
the early 1970s, the other on an exhibition held at the centre's art gallery in 2004.

Feedback (15 mins)
Each of the small groups fed back to the room with a brief 'significance statement' about its 'clusters'.

Wrap-up discussion (10 mins)

END
Other small groups of material were put on display for participants to inspect during and after the session –
just to give them a feel for the archive’s scope and range.
A staff member not connected with the archive review project commented:
The fact that you are cataloguing this and providing access later on means that people will know what
amazing stuff you have – if you were putting on an exhibition about the Cold War or the Iron Curtain
no-one would think of approaching the Centre for something that’s relevant to that story – so just that
process of telling people what’s actually there will be incredible and really put you on the map in a
different way which will be amazing.
Participant at project workshop