Stream Survey Protocol Comparisons; In search of truth, comparability and reality.
Brett Roper ([email protected])
Aquatic Monitoring Program Lead
USDA Forest Service
Overview
• Precision – this addresses how repeatable a given monitoring protocol is. The more repeatable the protocol, the more confidence we have in any one observation (making a call at the stream reach).
• Are the results related to the "truth"?
• Are there relationships among protocols – can data on attributes be shared even if protocols are different?
• If the question is of a high-level indicator, say stream health, are monitoring groups comparable?
Repeatability within a protocol
• Low coefficient of variation
• High signal to noise
• Discrimination among streams
Ensures that different evaluations of a stream by a monitoring program draw the same conclusion regardless of who does the survey.
Let's look at the within-group variability of gradient for the 12 streams.
           PIBO       UpCol      AREMP      ODFW-S     EMAP       CFG
Mean       3.332583   3.73149    3.352407   3.481704   3.601553   3.40905
RMSE       0.237552   0.394455   0.198897   0.763588   0.492544   1.00649
CV         7.128166   10.57097   5.93971    21.91885   13.67588   29.52406
S:N        124.4179   49.23136   188.2179   14.08451   28.65087   4.87883
%Observer  0.007973   0.019908   0.005285   0.066293   0.033726   0.170102
%Stream    0.992027   0.980092   0.994715   0.933707   0.966274   0.829898
Groups     9          6          8          4          6          4
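The summary statistics in the table can be reproduced from repeat surveys of the same reaches with a one-way variance decomposition: CV is the RMSE of repeat visits relative to the grand mean, and S:N is the between-stream (signal) variance divided by the within-stream observer (noise) variance, which matches %Stream/%Observer in the table. A minimal sketch, using synthetic data (the stream labels and values below are illustrative, not the study's):

```python
from statistics import mean

def repeatability_stats(surveys):
    """surveys: {stream: [repeat measurements of one attribute]}.
    Returns CV (%), signal-to-noise, and observer/stream variance shares
    from a one-way variance decomposition."""
    grand = mean(m for reps in surveys.values() for m in reps)
    # Observer (within-stream) variance: pooled variance of repeat visits.
    within = mean(
        sum((m - mean(reps)) ** 2 for m in reps) / (len(reps) - 1)
        for reps in surveys.values()
    )
    # Stream (between-stream) variance of the per-stream means.
    means = [mean(reps) for reps in surveys.values()]
    between = sum((s - grand) ** 2 for s in means) / (len(means) - 1)
    cv = 100 * within ** 0.5 / grand   # RMSE relative to the grand mean
    total = between + within
    return {"CV": cv, "S:N": between / within,
            "%Observer": within / total, "%Stream": between / total}

# Three streams, two observers each; gradients chosen so streams differ
# far more than observers do (i.e. a high-repeatability protocol).
stats = repeatability_stats({"A": [1.0, 1.1], "B": [3.0, 2.9], "C": [6.0, 6.2]})
```

With data like these, most of the variance sits between streams, so %Stream is near 1 and S:N is large, exactly the pattern the best-performing groups show above.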
PIBO UC AREMP ODFWL EMAP CFG NIFC
Gradient A(1) A A(1) B A B
BF Width A(1) B B D C B A
WD C D B C D D B(1)
Sinuosity B B A(1) B
Entrenchment F F F(1) F F F(1)
% Pool D D B B D F B(1)
Res Pool Depth A(1) A(1) A(1) A(1) A(1) D A(1)
D50 B(1) C C F
Fines B(1) D B D C C
LWD # A(1) B A(1) B B F D
Most groups have repeatable protocols. A's and B's are good for sites; B's, C's, and D's are good for large-scale surveys that rely on large sample sizes or on detecting large changes; F's will be difficult to use in determining status or trend at a site or across sites.
If most protocols are OK (and no one can force them to change), the question becomes: are they related to each other and/or the "truth"?
Some attributes appear to crosswalk easily, even if there is some variability within a monitoring group.
[Figure: gradient by stream for Truth, UC, PIBO, AREMP, ODFWS, EMAP, and CFG]
Truth to protocol; Gradient – all are good even if they were not of the same repeatability.
Truth = 0.05307 + 1.02799(AREMP), r² = 0.995
Truth = -0.30709 + 1.20253(CFG), r² = 0.980
Truth = -0.19464 + 1.04283(EMAP), r² = 0.993
Truth = 0.09352 + 0.9498(ODFWS), r² = 0.963
Truth = 0.03793 + 1.05197(PIBO), r² = 0.993
Truth = -0.21219 + 1.00497(UC), r² = 0.992
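Each of these "truth to protocol" crosswalks is an ordinary least-squares line, Truth = a + b(protocol). A sketch of that fit, using invented paired measurements (the numbers below are illustrative, not the study's data):

```python
# Ordinary least squares for Truth = a + b * (protocol), as in the
# crosswalk equations above.
def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot  # intercept, slope, r^2

protocol = [1.0, 2.0, 4.0, 6.0, 9.0]  # hypothetical gradients (%) from one group
truth = [1.1, 2.1, 4.3, 6.2, 9.4]     # intensively surveyed values, same reaches
a, b, r2 = ols(protocol, truth)       # a near 0, b near 1: a usable crosswalk
```

An intercept near zero and a slope near one, with high r², is what the gradient equations above show: the protocols are nearly interchangeable for this attribute.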
Sinuosity; By Stream
[Figure: sinuosity by stream for Truth, UC, PIBO, AREMP, and EMAP]
Sinuosity – By stream type
[Figure: sinuosity by channel type (step pool, pool riffle, plane bed) for AREMP, EMAP, PIBO, and UC]
Truth to protocol; Sinuosity
Truth = 0.10032 + 0.9172(AREMP), r² = 0.93
Truth = 0.18409 + 0.86456(EMAP), r² = 0.95
Truth = 0.31618 + 1.21918(PIBO), r² = 0.76
Truth = 0.40388 + 0.65961(UC), r² = 0.87
Bankfull
         PIBO       UpCol      AREMP      ODFWS      EMAP       CFG        NIFC
Mean     4.5653     4.0053     7.403117   6.164332   5.267525   6.097837   5.903125
RMSE     0.33124    0.5699     1.628248   2.575474   1.88915    1.484112   0.829206
CV       7.255621   14.229     21.99408   41.78026   35.8641    24.3384    14.04691
S:N      58.0629    20.22506   10.85588   2.808503   2.483763   6.774312   24.73901
%Obs     0.016931   0.047114   0.084346   0.26257    0.287046   0.128629   0.038852
%Stream  0.983069   0.952886   0.915654   0.73743    0.712954   0.871371   0.961148
Groups   6          4          3          4          3          4          4
Bankfull Width
[Figure: bankfull width by stream for Truth, UC, PIBO, AREMP, ODFWS, EMAP, CFG, and NIFC]
Truth to protocol; Bankfull.
Truth = -0.2986 + 1.2382(AREMP), r² = 0.59
Truth = 0.8448 + 1.1273(CFG), r² = 0.63
Truth = 0.6383 + 1.3139(EMAP), r² = 0.73
Truth = 1.8871 + 0.9654(NIFC), r² = 0.57
Truth = 1.6484 + 0.9898(ODFWS), r² = 0.65
Truth = 1.6163 + 1.1731(PIBO), r² = 0.59
Truth = 2.1586 + 1.2536(UC), r² = 0.52
Protocol to protocol; Bankfull is pretty good across the board.
UC = 0.6621 + 0.4516(AREMP), r² = 0.91
UC = 0.1320 + 0.6352(CFG), r² = 0.95
UC = 0.1765 + 0.7939(EMAP), r² = 0.95
UC = 0.4258 + 0.6158(NIFC), r² = 0.98
UC = 0.6825 + 0.9898(ODFWS), r² = 0.97
UC = -0.6227 + 1.0137(PIBO), r² = 0.99
Width-to-Depth
[Figure: width-to-depth ratio by stream for Truth, UC, PIBO, AREMP, ODFWS, EMAP, CFG, and NIFC]
        Truth   UC      PIBO    AREMP   ODFWS   EMAP    CFG     NIFC
Truth   1       0.032   0.334   0.006   0.336   0.121   0.003   0.326
UC      0.032   1       0.796   0.663   0.828   0.922   0.747   0.827
PIBO    0.334   0.796   1       0.519   0.743   0.694   0.503   0.731
AREMP   0.006   0.663   0.519   1       0.453   0.607   0.722   0.473
ODFWS   0.336   0.828   0.743   0.453   1       0.880   0.603   0.883
EMAP    0.121   0.922   0.694   0.607   0.880   1       0.789   0.906
CFG     0.003   0.747   0.503   0.722   0.603   0.789   1       0.716
NIFC    0.326   0.827   0.731   0.473   0.883   0.906   0.716   1
Sometimes protocol results may not be strongly related to the truth (this is difficult to say because the truth had no data in streams with large values), but they are still related to each other – which is more important?
All the discussion up to now has focused on single attributes – but what about some higher-level indicator called "stream health"?
• If you add up the outcomes of several indicators, do they give you the same picture across monitoring programs?
• How does standardizing results across several attributes affect signal-to-noise and conclusions about streams?
An example, using width-to-depth, % pools, residual pool depth, % fines, and large wood count as indicators.
Standardize each of the attributes for each of the monitoring groups by applying the following equations:

Score = 100 × (obs − min) / (max − min)        [% pool, residual pool depth, large wood count]

Score = 100 − 100 × (obs − min) / (max − min)  [width-to-depth, % fines]
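The two standardization formulas can be sketched as below. Only the formulas come from the slide; the min/max ranges and the single reach scored here are hypothetical. "Less is better" attributes (width-to-depth, % fines) are flipped so that 100 is always the best score:

```python
# Direct score for "more is better" attributes; flipped (100 - score)
# for "less is better" attributes.
def score(obs, lo, hi, more_is_better=True):
    s = 100 * (obs - lo) / (hi - lo)
    return s if more_is_better else 100 - s

# One reach scored on the five indicators named above (made-up ranges).
index = [
    score(35, 10, 60),                       # % pool
    score(0.4, 0.1, 1.0),                    # residual pool depth (m)
    score(80, 0, 200),                       # large wood count
    score(25, 5, 55, more_is_better=False),  # width-to-depth
    score(12, 0, 40, more_is_better=False),  # % fines
]
overall = sum(index) / len(index)  # "if you average these values"
```

Averaging the five standardized scores gives the single 0–100 index per reach that the following slides compare across monitoring groups.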
If you average these values
[Figure: average index score (0–100) by stream (Big Bridge, Camas, Crane, Crawfish, Indian, Myrtle, Potamus, Tinker, Trail, WF Lick, Whiskey) for AREMP, CFG, EMAP, NIFC, ODFWS, PIBO, and UC]
Then if you standardize to 1 to 100
[Figure: standardized index score by stream (Big Bridge, Camas, Crane, Crawfish, Indian, Myrtle, Potamus, Tinker, Trail, WF Lick, Whiskey) for AREMP, CFG, EMAP, NIFC, ODFWS, PIBO, and UC]
S/N: 4.59, 0.69, 2.76, 11.35, 1.84, 2.49, 5.32
Index Score – 1 = Best, 12 = Worst
[Figure: index rank by stream (1 = best, 12 = worst) for UC, PIBO, AREMP, ODFWS, EMAP, CFG, and NIFC]
        UC      PIBO    AREMP   ODFWS   EMAP    CFG     NIFC
UC      1       0.636   0.723   0.432   0.613   0.716   0.603
PIBO    0.636   1       0.716   0.413   0.379   0.391   0.460
AREMP   0.723   0.716   1       0.396   0.549   0.716   0.549
ODFWS   0.432   0.413   0.396   1       0.459   0.603   0.878
EMAP    0.613   0.379   0.549   0.459   1       0.560   0.740
CFG     0.716   0.391   0.716   0.603   0.560   1       0.704
NIFC    0.603   0.460   0.549   0.878   0.740   0.704   1
Relationship in the ranks of the index – all are significantly related (P < 0.05) to each other, but the strength of the relationship varies. This will need more work to relate to the "truth".
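Rank relationships like those above are typically measured with a Spearman correlation, i.e. a Pearson correlation computed on the rank vectors. A minimal sketch, assuming no tied ranks (the stream rankings below are invented):

```python
# Spearman rank correlation: rank both vectors, then take the Pearson
# correlation of the ranks. This sketch does not handle ties.
def spearman(a, b):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Two monitoring groups ranking the same five streams by index score.
rho = spearman([1, 2, 3, 4, 5], [2, 1, 3, 5, 4])
```

A rho near 1 means the two groups order the streams the same way even if their raw index scores differ, which is the comparison the matrix above is making.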
So what did we learn from this protocol comparison?
• Monitoring groups vary in repeatability for different attributes.
• In most cases, monitoring group repeatability is at least acceptable and correlated to what other groups are also measuring for that attribute.
• Results from monitoring groups are related to some more strictly/intensively defined "truth".
• The real truth is defined by the monitoring objective: what are you using bankfull width for? This question sometimes appears forgotten in the search for repeatability.
• If the real question is some higher-level indicator like "stream health", data from different groups seem to be correlated.
• If the stream population from which the sample came can be defined, then it may be possible to use monitoring group as a block effect when analyzing trend (when repeatability is acceptable) – status may be more difficult because each protocol measures a slightly different mental construct of an attribute.