On the Interpolation Algorithm Ranking


On the Interpolation Algorithm Ranking

Carlos López-Vázquez

LatinGEO – Lab

SGM+Universidad ORT del Uruguay

10th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, 10–13 July 2012, Florianópolis, SC, Brazil.

What is algorithm ranking?

There exist many interpolation algorithms

Which is the best? Is there a general answer?

Is there an answer for my particular dataset?

How to define the better-than relation between two given methods?

How confident should I be in such an answer?

What has been done?

N points are sampled somewhere. Subdivide them into two sets: a Training Set {A} and a Test Set {B}

A∩B=Ø; N=#{A}+#{B}

Repeat for all available algorithms:

Define interpolant using {A};

How to compare? Typically through RMSE or MAD

Better-than is equivalent to lower RMSE (see the sketch below)
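
A minimal sketch of this recipe, with SciPy's griddata standing in for a real candidate algorithm; the dataset, split sizes, and the linear method are all illustrative assumptions:

import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(200, 2))        # N sampled locations
z = np.sin(3 * xy[:, 0]) * np.cos(3 * xy[:, 1])  # observed values

idx = rng.permutation(len(z))
A, B = idx[:150], idx[150:]                      # A∩B=Ø; N = #{A} + #{B}

# Define the interpolant using {A}, blindly interpolate at the locations of {B}
z_hat = griddata(xy[A], z[A], xy[B], method='linear')

err = z_hat - z[B]
rmse = np.sqrt(np.nanmean(err ** 2))   # nan-safe: griddata yields NaN outside the hull
mad = np.nanmean(np.abs(err))          # MAD read here as mean absolute deviation
print(f"RMSE={rmse:.4f}  MAD={mad:.4f}")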

Many papers so far

Permanent interest

What does a typical paper look like? It takes a dataset as an example

Splits it into {A} and {B}

Blindly interpolates at the locations of {B}

Compares the known values at {B} with the interpolated ones

Is RMSE/MAD/etc. suitable as a metric?

Different interpolation algorithms lead to visually different results

RMSE might not be representative. Why?

Images from www.spatialanalysisonline.com

Let’s consider spectral properties

Some spectral metric of agreement

For example, ESAM metric

U = |fft2d(measured field)|, so U(i,j) ≥ 0

V = |fft2d(interpolated field)|, so V(i,j) ≥ 0

ideally, U=V

ESAM(U,V) = 1 − (2/π) · arccos( Σᵢ uᵢvᵢ / √( (Σᵢ uᵢ²) · (Σᵢ vᵢ²) ) )

0≤ESAM(U,V)≤1

ESAM(W,W)=1

Hint!: There might be better options than ESAM
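
A NumPy sketch of ESAM as defined above; note that the 1 − (2/π)·arccos form is a reconstruction consistent with the stated properties (0 ≤ ESAM ≤ 1, ESAM(W,W) = 1), not a verbatim quote of the original formula:

import numpy as np

def esam(measured, interpolated):
    """Spectral agreement between two 2-D fields; 1 means identical spectra."""
    U = np.abs(np.fft.fft2(measured))      # amplitude spectrum, U[i, j] >= 0
    V = np.abs(np.fft.fft2(interpolated))  # amplitude spectrum, V[i, j] >= 0
    cos_angle = np.sum(U * V) / np.sqrt(np.sum(U ** 2) * np.sum(V ** 2))
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))  # clip guards rounding noise
    return 1.0 - 2.0 * angle / np.pi

field = np.random.default_rng(1).normal(size=(64, 64))
print(esam(field, field))                  # -> 1.0 for identical fields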

How confident should I be in such an answer?

Given {A} and {B}, the answer is deterministic

How to attach a confidence level? Or just some uncertainty? Perform Cross Validation (Falivene et al., 2010)

Set #{B}=1, and leave the rest with {A}

N possible choices (events) to select B

Evaluate RMSE for each method and event

Average for each method over N cases

Better-than is now Average-run-better-than
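
A leave-one-out sketch of that loop, again with griddata as a stand-in for one candidate method:

import numpy as np
from scipy.interpolate import griddata

def loo_score(xy, z, method='linear'):
    """Average the per-event error over the N leave-one-out events."""
    errs = []
    for b in range(len(z)):                 # each point plays {B} exactly once
        mask = np.ones(len(z), dtype=bool)
        mask[b] = False                     # {A} is everything else
        z_hat = griddata(xy[mask], z[mask], xy[b:b + 1], method=method)[0]
        if np.isfinite(z_hat):              # skip events outside the convex hull
            errs.append(abs(z_hat - z[b]))  # RMSE of a single point = |error|
    return float(np.mean(errs))             # average over the N events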

Simulate: sample {A} from the N points, with #{A}=m, m<N

Evaluate RMSE for each method and event, and create rank(i)

Select a confidence level, and apply Friedman's Test to all rank(i), just as when n wine judges each rank k different wines
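
A sketch of that test with SciPy; the RMSE table here is fabricated noise purely to exercise the call (rows are the simulated events, the "judges"; columns are three hypothetical algorithms, the "wines"):

import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(2)
rmse = rng.normal(loc=[1.0, 1.1, 1.3], scale=0.1, size=(250, 3))

stat, p = friedmanchisquare(rmse[:, 0], rmse[:, 1], rmse[:, 2])
print(p < 0.05)   # True: the methods' rankings differ at the 95% level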

The experiment

Apply six algorithms

Evaluate RMSE, MAD, ESAM, etc.

Evaluate ranking(i)

Evaluate ranking of means over i

Apply Friedman’s Test and compare

DEM of Montagne Sainte Victoire (France)

Sample {B}, 20 points, held fixed

Do 250 times:

Sample {A} points
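
A skeleton of that experiment, under the assumption that fit_predict_fns is a hypothetical list of callables, one per algorithm, each mapping (train_xy, train_z, test_xy) to predicted values:

import numpy as np
from scipy.stats import rankdata

def run_experiment(fit_predict_fns, xy, z, test_idx, m, reps=250, seed=0):
    rng = np.random.default_rng(seed)
    pool = np.setdiff1d(np.arange(len(z)), test_idx)     # candidates for {A}
    ranks = np.empty((reps, len(fit_predict_fns)))
    for i in range(reps):
        train = rng.choice(pool, size=m, replace=False)  # fresh {A} each event
        rmses = [np.sqrt(np.mean((f(xy[train], z[train], xy[test_idx])
                                  - z[test_idx]) ** 2))
                 for f in fit_predict_fns]               # RMSE per method
        ranks[i] = rankdata(rmses)                       # rank(i): 1 = best
    return ranks  # rows feed Friedman's test; column means give the mean rank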

Results

Ranking using the mean of simulated values might differ from the one given by Friedman's test

Ranking using spectral properties might disagree with that of RMSE/MAD

Friedman’s Test has a sound statistical basis

Spectral properties of the interpolated field might be important for some applications

Questions?

Thank you!

Results

Other results, valid for this particular dataset:

Ranking using ESAM varies with #{A}

According to ESAM criteria, Inverse Distance Weighting (IDW) quality degrades as #{A} increases

According to RMSE criteria, IDW is the best

With a significant difference w.r.t. the second one

At the 95% confidence level

Irrespective of #{A}

According to ESAM criteria, IDW is NOT the best

Other possible spectral metrics (to be developed)

[Figure: energy spectra for N=500 data points. Axes: Energy [m²] vs. Wavenumber [1/m]. Curves: Reference, IDW, V4, Nearest, GRIDFIT w/Laplacian, GRIDFIT w/smooth=0.5, GRIDFIT w/smooth=2; one curve is annotated "Recommended".]