Characterization using block simulation
Extensive project documentation
11 September 2013
CERENA
INDEX
• Chapter I – What is a block?
• Chapter II – Creating blocks
• Chapter III – Interpreting reality with blocks
• Chapter IV – Kriging with blocks
• Chapter V – Uncertainty characterization with blocks
• Chapter VI – Case Studies
Chapter I – What is a block?
• What is a block?
• What is a block value?
• What is a block centroid?
• What is a block error?
• What is a block file?
• Things to know about blocks?
• Types of block data
• Methods for calculating block value
• Methods for calculating block error
• Blocks bottom line
What is a block?
1 Let's start with an image with several pixels, each with its own value.
2 You can imagine that each of the pixels is actually a point with a precise value and location in space.
Grid
Reference image
Point data
Legend
What is a block?
3 We create some divisions (for now the division method is not important): sets of points with less resolution than the original image.
4 Now our point-located space is divided into different areas. Each of those areas is something of a primordial block.
Legend
Point data
Divisions
What is a block value?
5 From the divisions we must analyze each of the sets to determine its properties.
6 • The most common value (shared by 8 of the 9 points)
• The probability value is "8/9 ; 1/9"
These are two different methods for calculating a block value for indicator variables. If the variable is continuous it could be the mean, for example.
Legend
Point data
Divisions
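The two block-value options above can be sketched in a few lines of Python (the function name and inputs are illustrative assumptions, not the actual implementation):

```python
from collections import Counter

def block_value_indicator(points, categories):
    """Return both options from the slide: the most common category in
    the block and the probability of each category."""
    counts = Counter(points)
    n = len(points)
    most_common = counts.most_common(1)[0][0]
    probabilities = [counts.get(c, 0) / n for c in categories]
    return most_common, probabilities

# The 3x3 block from the slide: eight points of lithology 1, one of lithology 2.
value, probs = block_value_indicator([1, 1, 1, 1, 1, 1, 1, 1, 2], categories=[1, 2])
print(value)                          # 1
print([round(p, 2) for p in probs])   # [0.89, 0.11]
```

For a continuous variable the same helper would simply be replaced by a summary statistic such as the mean.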
What is a block value?
7 Now all the block divisions have a value (for illustration purposes let's use the most common value).
Legend
Point data
Divisions
Point whose value has changed in the local block (if the most common value is used)
8 Some of the points have changed value, and so we have lost some detail when changing an image into blocks. For this reason (and others…) there is an error for each block. We'll talk more about that later.
What is a block centroid?
9 Each block has a centroid, a location in X,Y,(Z) which can be used for searches or variogram (correlogram) calculation, depending on the type of implementation of sequential indicator block simulation.
10 We can actually start managing and using blocks by their centroid locations. In fact, if we do so, it would be similar to having a point-set of centroids, with each point having a specific value (and error).
Legend
Point data
Divisions
Block centroid
* If by any chance you thought the centroid symbol looks like a nipple you
should know that you’re a particularly libidinous person. Just saying…
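A centroid as described here (the mean location of the block's points) can be sketched as:

```python
def block_centroid(points):
    """Centroid as the mean X, Y (and Z) of the points inside the block."""
    n = len(points)
    return tuple(sum(p[d] for p in points) / n for d in range(len(points[0])))

# Four corner points of a 2x2 block:
print(block_centroid([(0, 0), (2, 0), (0, 2), (2, 2)]))   # (1.0, 1.0)
```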
What is a block error?
11 Now we have blocks, each with a number of points, a centroid, and a value. We can also introduce an error.
(Figure: per-block errors; 0 for the unanimous blocks and 0.5, 0.25, and 0.11 for the mixed ones.)
12 Error, like value, is a user choice. Methods to calculate error are:
• Using the probability value as error (image above).
• Using the size of the block (the bigger the block, the bigger the error).
• Using a map to attach errors, like a map of distance to the hard-data (the farther from the hard-data, the bigger the error).
The last two methods need a user-inserted range for the error so that, for example, the upper error limit corresponds to the biggest block.
Legend
Point data
Divisions
Block centroid
Error
quantification
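The first method (probability value as error) can be sketched as the probability of the block not being its most likely value (a hypothetical helper, not the actual implementation):

```python
def probability_error(probabilities):
    """'Probability as error': the chance that the block is NOT its most
    likely value, i.e. 1 minus the highest bin probability."""
    return 1.0 - max(probabilities)

print(probability_error([1.0, 0.0]))    # 0.0, a unanimous block has no error
print(probability_error([8/9, 1/9]))    # ~0.11, the mixed block from the slide
```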
What is a block file?
BLOCK_FILE 99 block #0 0.94;0.00;0.00;0.05 0.3 55.0 0.0 55.0 1.0 55.0 2.0 … …
Probability for each bin Error
Points in block
Number of blocks in file
13
This is how a block file looks. You have general information like the name of the block set and the number of blocks in the file. In this example the value of the block is the probability for each bin. After indicating which block the information refers to (all blocks are in the same file), it gives the coordinates of the points inside that specific block.
If the chosen method for the value were the most likely value, it would appear as:
• 1;0;0;0
You must have guessed that this particular example has 4 indicator bins. If the value were continuous (no indicator bins) a single value would appear. The error always goes from 0 to 1, 1 being the maximum possible uncertainty and 0 complete certainty. A more uncertain block will have less weight in the kriging procedure.
14
* No pretty things in this slide. Here's Velázquez's Rokeby Venus to get you by.
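A record like the one above could be parsed with a small sketch, assuming the layout annotated on the slide (block id, semicolon-separated bin probabilities, error, then X Y pairs); the real file format may differ in details the slide elides:

```python
def parse_block_record(fields):
    """Parse one block record under the assumed layout: 'block #<id>',
    the value (semicolon-separated bin probabilities, or a single number
    if continuous), the error, then X Y pairs for the points in the block."""
    block_id = int(fields[1].lstrip("#"))
    value = [float(v) for v in fields[2].split(";")]
    error = float(fields[3])
    coords = [float(v) for v in fields[4:]]
    points = list(zip(coords[0::2], coords[1::2]))
    return {"id": block_id, "value": value, "error": error, "points": points}

record = parse_block_record(
    "block #0 0.94;0.00;0.00;0.05 0.3 55.0 0.0 55.0 1.0 55.0 2.0".split()
)
print(record["id"], record["error"], len(record["points"]))   # 0 0.3 3
```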
Things to know about blocks?
A
A block doesn't need to be as densely populated with points as the image used to create it. In fact, you don't even need an image to create blocks.
B
The block doesn’t need to have
a regular shape. In fact it is quite
flexible although implementing
an algorithm to deal with flexible
block shapes is computationally
intensive and may not be worth
the trouble.
Things to know about blocks?
C
Block management doesn’t need to be all
about centroids (it’s just faster). We could
use all points inside a block for search and
correlogram calculation.
D A block doesn't need to be a small set of points. It could be solid, with the search done by portions.
Things to know about blocks?
E
Blocks don’t need to cover the entirety of
the study area. In fact you may want to
deliberately create high uncertainty areas
with no block information.
F
Even when optimized or minimized for a specific purpose, block simulation is still significantly heavier computationally than the common simulation or co-simulation procedures. Block simulation is about flexibility.
(Figure: computational time comparison.)
Types of block data
Lithology 1 (example: high porosity lithology)
Lithology 2 (example: low porosity lithology)
High porosity Low porosity
Indicator variable (discrete) Continuous variable
Nothing prevents a block from being both indicator and continuous, although it may not be clear how to use this variable duality for characterization purposes.
Methods for calculating block value
Indicator variable (discrete)
An indicator variable is a probability variable or, more precisely, the probability of occurrence of a given discrete value. For this reason, in the top-left block we have:
• The most common value (shared by 8 of the 9 points)
• The probability value "8/9 ; 1/9"
Which means either:
• 1;0: probability 1 (100%) of having lithology 1 and 0 (0%) of having lithology 2.
Or:
• 0.89;0.11: probability 0.89 (89%) of having lithology 1 and 0.11 (11%) of having lithology 2.
If we had three lithologies (or any other discrete variable) we would have three probability fields ("1;0;0" or "0;1;0", etc.). The methods above are the most obvious ones, and you don't need to calculate all the blocks in your set with the same method (both for value and error).
Methods for calculating block value
A continuous variable works in its own order of magnitude. Porosity, for instance (the percentage of empty space, or voids, in a volume; 100% is completely void). Since there is no way of counting balls of porosity (without transforming the variable into an indicator one), the best way to produce a value for a block is to use the most common (and uncommon, if you like) statistical indicators such as:
• Mean
• Median
• “n” Percentile
• Maximum
• Minimum
• Etc.
Continuous variable
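These summary statistics can be sketched in Python (the dispatch helper is an assumption for illustration):

```python
import statistics

def block_value_continuous(values, method="mean"):
    """Block value for a continuous variable (porosity, for example),
    using a user-chosen summary statistic."""
    methods = {
        "mean": statistics.mean,
        "median": statistics.median,
        "min": min,
        "max": max,
    }
    return methods[method](values)

porosity = [0.10, 0.12, 0.15, 0.30, 0.08]   # hypothetical point values in a block
print(round(block_value_continuous(porosity, "mean"), 4))   # 0.15
print(block_value_continuous(porosity, "median"))           # 0.12
```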
Methods for calculating block error
(Figure: per-block errors; 0 for the unanimous blocks and 0.5, 0.25, and 0.11 for the mixed ones.)
Above is an example of a block error calculated as the probability of not being the value of that same block.
Should I decide my maximum error is 0.5 and minimum is 0.2, then:

Error = (size - minimum size) / (maximum size - minimum size) × (maximum error - minimum error) + minimum error
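The size-based formula can be written directly as code (the function name is an assumption):

```python
def size_error(size, min_size, max_size, min_error, max_error):
    """'Size matters' error: linear rescale of the block size into the
    user-chosen [min_error, max_error] range."""
    t = (size - min_size) / (max_size - min_size)
    return min_error + t * (max_error - min_error)

# Sizes from the slide, from a 3x3 block (9 points) down to 2-point blocks,
# with a user range of [0.2, 0.5]:
print(round(size_error(9, 2, 9, 0.2, 0.5), 3))   # 0.5 (biggest block, biggest error)
print(round(size_error(2, 2, 9, 0.2, 0.5), 3))   # 0.2 (smallest block)
print(round(size_error(4, 2, 9, 0.2, 0.5), 3))   # 0.286, close to the slide's 0.29
```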
"Size matters" method
(Figure: block sizes of 3×3 = 9, 2×2 = 4, and 2×1 = 2 points, mapped to errors of 0.5, 0.29, and 0.22.)
Error
Hard-data or scenario marker
Block centroid
"Error mapping" method
(Figure: block errors between 0.2 and 0.5, increasing with distance from the hard-data.)
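A minimal sketch of the "error mapping" idea, using distance to the nearest hard-data; the linear ramp and clamping are assumptions:

```python
import math

def distance_error(centroid, hard_data, min_error, max_error, max_distance):
    """'Error mapping' sketch: error grows linearly with the distance to
    the nearest hard-data location, clamped to the user-chosen range."""
    d = min(math.dist(centroid, h) for h in hard_data)
    t = min(d / max_distance, 1.0)
    return min_error + t * (max_error - min_error)

hard = [(0.0, 0.0), (10.0, 0.0)]
print(distance_error((0.0, 0.0), hard, 0.1, 0.5, 10.0))            # 0.1, on the hard-data
print(round(distance_error((5.0, 0.0), hard, 0.1, 0.5, 10.0), 3))  # 0.3, halfway out
```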
“Disagreeing” method
Blocks bottom line
Block - set
A block-set is a set of blocks. A block is an entity composed of points* located in space (delimiting the size, shape and body of the block). A block has:
• Value (discrete or continuous)
• Error
• Centroid
* For all implementations made so far (at least to our knowledge).
The value can be indicator (discrete) or continuous. Depending on the type, it can be calculated by procedures such as:
• Most common element (indicator)
• Probability for each phase (indicator)
• Mean (continuous)
• Minimum (continuous)
• Maximum (continuous)
• Median (continuous)
• Percentile (continuous)
• Etc.
Error is measured from 0 (no error) to 1 (full error). This presentation has proposed some methods to estimate error, such as:
• “Disagreeing” method (indicator)
• “Size matters” method (both)
• “Error mapping” method (both)
The centroid is a point in a central location of the block (the mean X,Y,Z locations, for example) which facilitates management of the set for search purposes, variogram calculation, etc. It is an optional "property" of a block whose main advantage is to increase performance in a heavy algorithm.
Chapter II – Creating blocks
• A simple net
• A less simple net
• Building a quadtree based procedure
• Void Selection Quadtree
• Marker tolerance
• Centroid management
• Meshing operations
A simple net
1 This is our grid (a bit bigger than
the one on Chapter I).
2 We’ve used a constant step of 2 to create
both horizontal and vertical divisions.
Grid
Divisions
Legend
A less simple net
3 Remember that it's blocks we're making, but this method gives the same shape to every single one of them; thus there are no areas that are denser and have less uncertainty.
4 Ideally we would have more blocks in areas where we are more certain of what exists there. But for that to happen we need some kind of algorithm to do it.
Grid
Divisions
Legend
Block value
Building a quadtree based procedure
5 Let's set out a user rule that says wherever a point marker (scenario marker) exists there should be a denser net (smaller blocks); where none exists, a less dense net. This kind of division is called a quadtree since it always splits the areas into four (quad) parts.
6 Let's make our level-one division, cutting the study area in halves. We get 4 areas; 3 of those are empty and are therefore saved as block areas. Let's go on to the next level.
Grid
Divisions
Legend
Scenario
marker
Building a quadtree based procedure
7 At level two we continue to split the areas where scenario markers remain. There are 3 new empty areas (voids) and 1 that still has a marker.
8 Going on to level 3 (let's stop leveling here) we get three more void areas and one filled. The final result for this particular test is…
Grid
Divisions
Legend
Scenario
marker
Building a quadtree based procedure
9 The result is that smaller blocks and denser areas appear near the marker. Since this grid doesn't have a lot of nodes, problems such as the division of odd numbers arise; in bigger grids (more nodes) this problem becomes non-existent.
10 You can imagine that the more markers, the more dense areas will exist. You can use markers to build block sets that are adequate to your case study. Since it was based on the quadtree concept and creates blocks wherever voids are found, we've called this procedure "Void Selection Quadtree"*.
* What can I say. We’re scientists not poets. Although that career
did cross my mind…
Grid
Divisions
Legend
Scenario
marker
Void Selection Quadtree
11 Let's review. We start with an empty, undivided area which we start splitting, level by level, into four smaller areas. But we only split an area if there's a marker inside it. The user chooses the number of levels to use. This is "Void Selection Quadtree".
12 Now we must take into account that this whole net we created must be made to generate blocks. Following the procedures explained in Chapter I we would get something like the example above.
Grid
Divisions
Legend
Scenario
marker
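The review above can be sketched as a recursive function (a minimal version assuming a square area and point markers; the real procedure also handles rectangular grids and odd divisions):

```python
def vsq(xmin, ymin, size, markers, levels):
    """Void Selection Quadtree sketch: split a square area into four
    while any marker falls inside it, down to a maximum level. Returns
    the leaf areas (the voids that become blocks)."""
    inside = any(xmin <= x < xmin + size and ymin <= y < ymin + size
                 for x, y in markers)
    if levels == 0 or not inside:
        return [(xmin, ymin, size)]
    half = size / 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            leaves += vsq(xmin + dx, ymin + dy, half, markers, levels - 1)
    return leaves

blocks = vsq(0, 0, 8, markers=[(1.0, 1.0)], levels=3)
print(len(blocks))   # 10 areas: detail concentrates around the marker
```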
Void Selection Quadtree
Even if the user is not happy with the final result, or somehow finds that some areas should have more detail (and notice we're making a parallel between detail and certainty), we can build a new model with a new marker that will not change the previous blocks. The procedure has little to no chaotic behavior.
14 We've introduced a new marker and got an update: better resolution in the area shown above. Working with this methodology is simple, although not without some secondary procedures that should be explained.
Grid
Divisions
Legend
Scenario
marker
13 Update
marker
Update
area
Marker tolerance
15 Sometimes the phenomenon shown in the top-left net happens: a point is too close to a division limit and in the end there is detail on only one of its sides. To solve this we can use a location tolerance, so that when a split area does not have a marker but is within the tolerance of one, it continues to be split. Notice how much better the result is when using an adequate (user-chosen) tolerance.
16 There must be some careful planning with marker tolerance. Too much and the net will lose its purpose.
Grid
Divisions
Legend
Scenario
marker
Marker
tolerance
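The tolerance test can be sketched as an enlarged containment check (names and geometry are illustrative assumptions):

```python
def area_should_split(xmin, ymin, size, markers, tol):
    """Marker-tolerance test: split an area when a marker lies inside it
    OR within `tol` of its boundary, so a marker sitting right on a
    division limit still produces detail on both sides."""
    return any(xmin - tol <= x < xmin + size + tol and
               ymin - tol <= y < ymin + size + tol
               for x, y in markers)

# A marker just outside the area's right edge:
print(area_should_split(0, 0, 4, [(4.1, 2.0)], tol=0.0))   # False
print(area_should_split(0, 0, 4, [(4.1, 2.0)], tol=0.5))   # True
```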
Meshing operations
The idea of Void Selection Quadtree proposed in this presentation was made specifically to be adequate to the majority of case studies we've been having. However, it is far from the only solution available. Meshing operations are common in many sciences (3D modeling, for instance) and you may have other ideas of what you can use. Just keep in mind that the objective is to have a net which will be used to create blocks.
This is a triangulation, and the objective is to make triangles between all the markers. There are quite a few methods available to do this (Delaunay, for example) but, more importantly, notice that the denser the markers, the smaller the triangles will be.
There's a bit of a problem, though. A triangle is not a good shape for retrieving blocks from a reference image in a regular grid (where all nodes have the same size, quite common in geostatistics). Instead we could use something a bit more stable, like quadrangulation.
A
Meshing operations
This is a quadrangulation. Instead of making triangles it makes four-sided shapes which are not necessarily squares (although if you had some really well-chosen markers you could get perfect squares).
To get perfectly adequate rectangular shapes (dealing only with 90-degree angles) you must use some method like the one proposed previously (VSQ).
B
Chapter III – Interpreting reality with
blocks
• The thing about reality…
• The assumption…
• Building blocks
• Analyzing the blocks
• Updating our blocks
• Error mapping blocks
The thing about reality…
1 Reality, as in what literally exists, is not a working concept, since if you knew it this presentation wouldn't exist in the first place. What we do work with is our belief (from experience, perhaps) of what reality is; therefore, an interpretation. Think of something like the image above. A well-trained eye could leap to something more recognizable.
2 The interpreter did expect that the prior image was a stratigraphic profile, even if a low-resolution one. He notices, however, that some geologic phenomenon is in place.
Grid
Low-res image
Legend
The assumption…
3 That geologic phenomenon may actually change the current view of things, since the folds seem too strong for the lithologies in place. There must be faults…
4 And faults this strong may originate discontinuities. Discontinuities lead to traps. It's quite a big leap, but by interpretation alone a strong, probable scenario was created. Of course there's quite an amount of uncertainty here.
Interpretation
Fault
Legend
Building blocks
5 So we do have a model, a reference image. But it is at most an educated guess. Does it agree with the variogram model retrieved from the hard-data? Can we characterize our model using this assumption?
6 Let's build a block scenario where we use our well data as markers and leave the fault areas with higher uncertainty.
Interpretation
Fault
Legend
Scenario
marker
Divisions
Variogram
directions
Analyzing the blocks
7 Not the best model ever generated, but the main purpose is there. We've given no marker tolerance, since we got smaller blocks in the center (between two wells) than in the exterior areas where information is lacking.
8 We can now do simulation with this block set, but it would be cautious to check whether all the structures we know to be in the most likely scenario are represented. The truth is we could hardly perceive something like a fault there. We have, at most, the general behavior of our case study represented in those blocks.
Block scenario
Fault
Legend
Scenario
marker
Divisions
Updating our blocks
9 Using hard-data as markers hasn't given us the probable scenario we were expecting. For this reason a few update markers (that are not hard-data) were used.
10 The new update still hasn't given us the level of detail that could be considered most adequate. Still, in the middle fault the most important breaks are represented. So let's go on.
Block scenario
Fault
Legend
Scenario
marker
Divisions
Update
marker
Error mapping blocks
10 So we've increased our detail near the faults but we still want uncertainty away from the hard-data. That's why we map some error onto the blocks.
11 Now we have a block scenario whose error is a function of distance to the hard-data. We could now proceed to characterization.
Block scenario
Fault
Legend
Scenario
marker
Error colored
centroid
Chapter IV – Kriging with blocks
• Let’s review kriging
• Block kriging
• Using error
• Sequential block simulation
• Block kriging facts
Let’s review kriging
We want to solve this algebraic system in order to retrieve the W's (weights). These weights are then multiplied by the values from samples 1, 2 and 3 and summed together to retrieve the kriged value of the node*.
Simulation grid
Hard-data
Node to be kriged
Simple kriging example. With ordinary kriging it would have an extra row and column.

| C11 C12 C13 |   | W1 |   | C1k |
| C21 C22 C23 | × | W2 | = | C2k |
| C31 C32 C33 |   | W3 |   | C3k |

Zk = W1·Z1 + W2·Z2 + W3·Z3

* I'm not being strictly literal. In fact, since this is simple kriging, the kriged value would be the result of this operation plus the user-input mean. If this were ordinary kriging (see the comment above the system) there would be no need for this.
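The simple kriging step can be sketched with NumPy (the correlogram values below are hypothetical, and this is an illustration rather than the authors' code):

```python
import numpy as np

def simple_kriging(C, c_k, values, mean):
    """Simple kriging sketch: solve C·w = c_k for the weights w, then
    estimate the node as mean + sum(w_i * (z_i - mean))."""
    w = np.linalg.solve(C, c_k)
    return mean + w @ (np.asarray(values, dtype=float) - mean)

# Hypothetical correlogram values between samples 1, 2, 3 (matrix C)
# and between each sample and the node k (vector c_k):
C = np.array([[1.0, 0.4, 0.2],
              [0.4, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
c_k = np.array([0.5, 0.6, 0.3])
print(simple_kriging(C, c_k, values=[1.0, 2.0, 1.5], mean=1.5))
```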
Block kriging
So the procedure would be the same: solve the system (see the previous slide for details on simple or ordinary kriging) and use the weights to multiply the hard-data and block values, retrieving the kriged value.
Simulation grid
Hard-data
Block centroid
Node to be kriged

| C11 C12 C13 C14 C15 C16 |   | W1 |   | C1k |
| C21 C22 C23 C24 C25 C26 |   | W2 |   | C2k |
| C31 C32 C33 C34 C35 C36 | × | W3 | = | C3k |
| C41 C42 C43 C44 C45 C46 |   | W4 |   | C4k |
| C51 C52 C53 C54 C55 C56 |   | W5 |   | C5k |
| C61 C62 C63 C64 C65 C66 |   | W6 |   | C6k |

(Data 1-3 are points and 4-6 are block centroids, so the matrix mixes point-to-point, point-to-block and block-to-block correlogram values.)

Zk = W1·Z1 + W2·Z2 + W3·Z3 + W4·Z4 + W5·Z5 + W6·Z6
Using error
So the specific error for each block is subtracted from the correlogram values of the blocks. In theory the system is then ready to be solved; however, since this operation cannot guarantee that the maximum values are on the system diagonal, all values are normalized so that the diagonal is strictly composed of correlogram values of 1*.
(Schematic: the same point-and-block system as before, with the block diagonal entries first becoming 1 - error before the whole matrix is normalized back to a diagonal of 1.)
* Please consider that this illustration is not taking into account the type of kriging the user might want to use. We've been using ordinary kriging, which would add a new line to the system shown in the schematic.
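A hedged sketch of this step; the exact bookkeeping of how the error is subtracted is not fully specified on the slide, so the row-and-column treatment here is an assumption:

```python
import numpy as np

def apply_block_error(C, errors):
    """Subtract each datum's error from its correlogram entries (hard-data
    points get error 0), then renormalize so the diagonal is exactly 1.
    Assumes every error is well below 1."""
    C = np.array(C, dtype=float)
    e = np.asarray(errors, dtype=float)
    for i in range(len(e)):
        C[i, :] -= e[i]      # penalize the block's row...
        C[:, i] -= e[i]      # ...and column
        C[i, i] += e[i]      # net effect on the diagonal: 1 - error_i
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)  # diagonal back to exactly 1

# One hard-data point (error 0) and one block with error 0.4:
C = np.array([[1.0, 0.6],
              [0.6, 1.0]])
print(apply_block_error(C, [0.0, 0.4]))  # off-diagonal shrinks below 0.6
```

The effect is the one described on the slide: the more uncertain a block, the weaker its correlation in the system, and hence the smaller its kriging weight.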
Sequential block simulation
Block simulation works pretty much like the common sequential simulation procedure. That is: already simulated nodes count as points for the kriging system. In our implementations we make no distinction between hard-data and node-data. So the search is done in two phases: first for points and nodes, second for blocks. So there are always points and blocks in the kriging system.
Simulation grid
Hard-data
Block centroid
Simulated node
Node to be simulated now
(Schematic: the same 6×6 point-and-block kriging system as in the previous slide, with previously simulated nodes entering as points.)
Block kriging facts
A Blocks enter like points in the kriging matrix and are used, like points, to calculate the kriging mean.
B Error is a block property and so is only used in blocks (as far as block kriging goes).
C The simulation procedures are done pretty much as they used to be; we simply have a new kind of secondary information.
Chapter V – Uncertainty
characterization with blocks
• What is uncertainty?
• Kinds of uncertainty
• Using blocks for uncertainty
What is uncertainty?
Uncertainty is a measure of how unsure we are about some kind of variable. Which means that, depending on the amount of uncertainty, the same procedure may have more or less possible outcomes.
(Figure: fuel spent versus distance done, with an uncertainty band around the trend.)
We are covering some distance with some vehicle and would like a model of how much the trip will cost. The thing is, depending on numerous parameters, the fuel consumption may be more or less. Measuring the uncertainty, we get a band of possible outcomes. From here we could even improve our model to give probabilities based on the other missing parameters.
The problem is that some case studies have so many variables of fundamental importance that an analytical solution for uncertainty is impractical. This is the reason that gave origin to sequential simulation in geostatistics.
A single node depends on all its neighbors (and perhaps other factors). And the node to be calculated after this one depends on the value of the first. When your case study has thousands to several million nodes (quite common), what are you going to do?
What is uncertainty?
Simulation 1
Simulation 2
Simulation 3
Hard-data, variogram, and the kriging mean image. We've made several simulations with all the statistical and spatial information we could find, and then we've calculated the mean of the simulations (the more simulations, the closer to the kriging mean) and the variance, among others (minimum, maximum, percentile, etc.).
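Computing the node-by-node mean and variance across an ensemble of simulations can be sketched as:

```python
import statistics

def pointwise_stats(simulations):
    """Node-by-node mean and variance across an ensemble of simulations;
    high variance flags the high-uncertainty areas."""
    means, variances = [], []
    for node_values in zip(*simulations):
        means.append(statistics.mean(node_values))
        variances.append(statistics.pvariance(node_values))
    return means, variances

sims = [[1, 0, 1, 1],   # simulation 1
        [1, 1, 0, 1],   # simulation 2
        [1, 0, 0, 1]]   # simulation 3
means, variances = pointwise_stats(sims)
# Nodes 1 and 4 always agree (variance 0); nodes 2 and 3 are the uncertain ones.
print(variances)
```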
What is uncertainty?
Simulation 1 Simulation 2
Simulation 3
With our parameterization we can see the areas where our
uncertainty (by means of variance) is higher.
Low
variance
High
variance
Kinds of uncertainty
Although the slides above give some conceptual idea of what uncertainty is, it can still be focused on aspects that may be more relevant to your case study.
For instance, while studying permeability you may find that the range equivalent to lower permeability is not relevant to your study, your intended uncertainty thus being the higher variance in the high-permeability areas.
Some studies are focused only on the higher or lower probability margins (sometimes called P10, P90) since these are, commonly, the frontier areas of the bodies (where things change rapidly).
Univariate distribution
Bodies frontier
Anisotropy characterization
Examples of possible uncertainty fields in spatial sciences.
Using blocks for uncertainty
Since blocks are a secondary-information support with high flexibility, it is possible to insert heuristic knowledge (experience) into the model.
In fact, by using block characterization, block presence (some areas may not have blocks), and block error, it's possible to localize features of the uncertainty.
Primary data and
parameterization
Block scenario
Chapter VI – Case Studies
• Uncertainty over a deterministic model
• The noisy channel
• Software development
Uncertainty over a deterministic model
We've got both sampling data (boreholes, among others) and a deterministic lithological model of a mine in Portugal. The thing is, the model does not have any kind of uncertainty study.
Authors: Júlia Carvalho, Pedro Correia, Amílcar Soares
Uncertainty over a deterministic model
We've used our Void Selection Quadtree procedure to build the block net, using strictly the hard-data as markers (since we are trying to perceive the level of uncertainty as you get farther from the real data).
Uncertainty over a deterministic model
We’ve simulated and got
the most likely image.
Actually it’s pretty close to
the original.
Uncertainty over a deterministic model
The main model is there, but where are the high-uncertainty areas? We've searched for areas where the value, by comparison with the reference image, was mostly disagreeing (80% wrong).
The white dots are the most problematic areas. Notice that they are located in small bodies, frontier areas, areas of changing anisotropy, etc.
Uncertainty over a deterministic model
Many tests were made and other information retrieved, such as contingency tables. This was the first case where sequential indicator block simulation was used. The results were promising.
The noisy channel
This is an attempt to retrieve and characterize a channel from a noisy image of acoustic impedance. The hard-data was scarce and most of the project was conducted using the acoustic impedance model.
ACOUSTIC IMPEDANCE CLUSTER ANALYSIS LOCAL MORAN’S I
Attributes were used to add some contrast to the model. The channel is the set of clusters that goes from lower left to top right.
Authors: Ângela Pereira, Ana Inês, Pedro Correia, Amílcar Soares
The noisy channel
A result from an attribute was chosen and the main shape of the channel interpreted (by mouse-picking the shape), thus transforming a continuous variable into an indicator one (in the first tests, a no-channel class and a channel class).
INTERPRETED SHAPE INDICATOR REFERENCE
IMAGE
The noisy channel
A block set was built with markers
(not hard-data) which stand in the
areas where the interpreter was
most certain of the existence of the
channel.
BUILT BLOCK NET NEW BLOCK SET
The noisy channel
The simulations were made, a most likely image retrieved, a probability map calculated, and finally a P10/P90 uncertainty analysis performed.
• REFERENCE IMAGE
• BLOCK SET AND HARD-DATA
• MOST LIKELY IMAGE
• PROBABILITY MAP
P10, P90 EVALUATION
The noisy channel
This project led to the further development of sequential indicator simulation and other support tools. In fact, a piece of software was developed for this purpose. The project is not over and tests are still being made on both synthetic and real data.
Software development
Software was designed and developed for the block characterization project. It has most of the procedures explained in this presentation (and others), and it was made to provide a platform for sequential indicator block simulation.
1 Object manager (where you choose most of the stuff).
2 Object viewer (updated automatically by object selection in the object manager).
3 First buttons on the toolbar. You use them to import your data (there are four types of data recognized by ShapeWORKS).
4 Context menus in the object manager (called by right-clicking an object). You use them to call all sorts of operations menus (some specific to each object type).
version 1
Developer: CERENA
Software development
1 Phasic interpretation allows the manual creation of an indicator image by mouse-clicking on screen.
2 By clicking the "Interpret" button on the new frame you'll get a smaller one (which can be maximized) where you click the vertices of the shape you want to make. To close a shape, just click the first vertex (it will appear as a red dot instead of white). You can do more than one shape in the same phase (selected in the "Phasic interpretation" frame). To finish mouse-click interpretation, just press the "X". When you finish the interpretation of all your phases, press the "Mark as object" button. Notice the preview panel will always show the interpretation of all phases with different colors.
3 Example of phasic interpretation.
Interpretation tool
example
Software development
1 A web is a new kind of object (used to create blocks). This procedure is used to create a web from the point markers.
2 From this frame the web for the blocks is created. If you choose "constant" as the "Web style" then the point markers won't be used, but the VSQ (Void Selection Quadtree) styles use the markers, as can be seen in this preview. Notice the smaller blocks are closer to the markers.
Web creation tool
example
Software development
Some procedures were never included in the software although that was the intention (the development phase was getting too long). This is the case for both the mesh and point variograms: the buttons exist but they were never implemented.
Also, some other things were pointed out by the alpha users, such as:
1) Visualizing blocks from their centroids is not easy.
2) The "Use all VSQ" method not using all blocks available.
3) In the error mapping frame the first image is not updated when the object is changed.
4) While exporting, some headers may not be correct.
NOTICE that this documentation was written on the 10th of September, 2013. It is unlikely that this developer will add new functionalities to the software unless it's utterly necessary for the research project for which it was built. The software is, therefore, considered adequate to its purpose and its continuous development halted.
THE END