
Statios Software and Services www.statios.com

Pangeos Documentation and User’s Guide

Version 1.3

Copyright © 2008


The Pangeos Software is aimed at the geostatistician with some modeling experience. We provide tools for geostatistical simulation, checking realizations, and post-processing realizations for recoverable reserves and decision-making. The goal here is to document the implementation decisions and details taken in Pangeos so that the user knows how to choose the correct parameters and arrive at the best solution.

Documentation cannot answer all of your questions. Send questions to [email protected] and we will answer promptly.


1 Introduction
1.1 Scope of Pangeos / Licensing
1.2 Principles and Software Layout
1.3 Data Objects
1.4 Internal Objects in Pangeos
1.5 Tutorial One: Project Setup

2 Data Analysis Tools
2.1 Histograms
2.2 Probability Plots
2.3 Scatterplots
2.4 Q-Q Plots
2.5 2-D Slice Maps
2.6 Tutorial Two: Data Analysis

3 Preprocessing Tools
3.1 Declustering
3.2 Despiking and Transformation
3.3 Trend Modeling
3.4 Category Model From Polygons
3.5 Local Probability Model From a Categorical Model
3.6 Tutorial Three: Preprocessing

4 Variogram Analysis Tools
4.1 Variogram Volume Calculation
4.2 Directional Variogram Calculation
4.3 Variogram Fitting
4.4 Average Variogram Calculation
4.5 Histogram Scaling
4.6 Variogram Scaling
4.7 Variogram Model Computation
4.8 Variogram Plotting
4.9 Tutorial Four: Variogram Calculation and Modeling

5 Geostatistical Modeling Tools
5.1 Kriging
5.2 Sequential Indicator Simulation
5.3 Sequential Gaussian Simulation
5.4 Tutorial Five: Geostatistical Modeling

6 Postprocessing Tools
6.1 Accuracy Plots
6.2 Models Merging
6.3 Extract from Grid
6.4 Average within Domains
6.5 Model Operations
6.6 Model Rescaling
6.7 Categorical Model Cleaning
6.8 Realization Histogram Correction
6.9 Reverse Stepwise Transformation
6.10 Recoverable Reserves Calculation
6.11 Analyze Multiple Realizations
6.12 Source of Uncertainty
6.13 Realization Ranking
6.14 Resource Classification
6.15 Reporting within Categories
6.16 Tutorial Six: Postprocessing

7 Grade Control Tools
7.1 Expected Profit Calculation
7.2 Dig Limit Optimization
7.3 Dig Limit Reporting
7.4 Tutorial Seven: Grade Control

8 Data Tools
8.1 Data Tools access
8.2 Data Tools available for all data types
8.3 Drillhole Data Tools
8.4 Model Data Tools
8.4.1 3D Model Data Tools
8.4.2 2D Model Data Tools
8.5 Surfaces Data Tools

9 Scripting
9.1 Running a script
9.2 Script structure
9.3 Keywords
9.4 ScriptVariable
9.4.1 Types
9.4.2 Declarations
9.4.3 Assignment
9.4.4 Special ScriptVariables
9.5 Computation
9.5.1 Syntax
9.5.2 Computation file
9.5.2.1 BEGIN
9.5.2.2 MAIN
9.5.2.3 END
9.5.3 A few elements of C syntax
9.5.4 Recognized functions
9.6 Examples
9.6.1 Building filename for logging
9.6.2 Reporting time
9.6.3 Computation

10 Visualization
10.1 Window
10.2 Keyboard shortcuts


1 Introduction

1.1 Scope of Pangeos / Licensing

There are a number of general mine planning (GMP) software packages on the market. GMP software is well suited to data management, conventional geologic modeling, visualization, and certain mine planning tasks. There are also a number of specialized geostatistics software packages on the market; however, none of them provides a satisfactory solution to the problems of (1) reliably generating geostatistical realizations of multiple variables within multiple rock types, and (2) providing optimized solutions to reserve estimation and grade control in the presence of uncertainty quantified by multiple geostatistical realizations. Pangeos fills this specialized mining geostatistics simulation and optimization niche.

Many of the tools in Pangeos were written specifically for Pangeos and are not based on any specific seed code. Certain numerical engines in Pangeos are based on the latest versions of GSLIB and related research programs, mostly from the Centre for Computational Geostatistics (CCG) at the University of Alberta. The software has been heavily modified from the initial versions released to all CCG industrial affiliates. The six modules provided by Pangeos are:

1. Data Analysis. Prior to any geostatistical modeling, some basic statistical analysis of the data is required. Pangeos includes basic tools such as histograms, probability plots, scatterplots and Q-Q plots. They are provided to check that data are imported correctly and to analyze some of the results of the different geostatistical tools in Pangeos.

2. Preprocessing. Modern geostatistical simulation tools are very dependent on the right choice of geological populations or rock types (stationarity), establishing large-scale trends, declustering to get representative distributions, and transformation to establish variables that are suitable for geostatistical simulation. A complete set of tools for these purposes is presented in preprocessing.

3. Variogram Analysis. The faithful variogram remains the main tool to quantify spatial correlation. The reasonableness of geostatistical simulation depends on anisotropy detection with variogram volumes, flexible calculation of directional variograms and correlograms, effortless variogram fitting, and error-free transfer of those fitted models to kriging and simulation. A complete set of variogram analysis tools is available in Pangeos.

4. Geostatistical Modeling. Kriging and Gaussian simulation within rock types are the core tools for geostatistical modeling. The fast and efficient processing of multiple rock types, multiple variables, and multiple realizations is essential in modern geostatistical simulation and optimization. A complete set of kriging and cosimulation tools is provided.

5. Postprocessing. The creation of a set of geostatistical realizations is just the beginning in modern geostatistical simulation and optimization. Realizations must be checked for data and statistics reproduction, block averaged to varying selective mining unit (SMU) sizes, and combined with various calculations. Recoverable reserves must be calculated for reporting and mine planning for SMUs or arbitrary volumes and time periods. The blocks may be classified as measured, indicated, or inferred using probabilistic criteria. Pangeos aims to provide the required postprocessing tools.


6. Grade Control. One interesting aspect of modern geostatistical simulation is the calculation of risk-qualified profit for optimal ore/waste classification and grade control. An important second step is the determination of optimal dig limits that account for the mining conditions and equipment. Tools are provided in Pangeos for this purpose.

Pangeos is an integrated Windows-based software application for these tasks. The user interface is based on the Microsoft® Windows and .NET standards for buttons and dialogs. The management of data and parameter files is similar to most e-mail programs such as Outlook®, making Pangeos familiar to most geologists and engineers and ensuring efficient usage.

The focus of Pangeos is reliable numerical engines for simulation and optimization; however, there is some limited visualization capability for model verification and presentation of results.

Pangeos is new and dynamic software. There are bugs that will require fixing, and new or modified functionality will be required. Please contact [email protected].

Installation and Licensing

The use of Pangeos is controlled through a license; the software may be used only in accordance with a valid license agreement.

A hardware key (dongle) controls the Pangeos license. The dongle is plugged into the USB port and controls the expiry date. A valid Pangeos license enables all modules in the application; there is no demo version, and no additional modules are available at extra cost. The number of data and the grid size are restricted if the dongle is not plugged in. All license inquiries should be made to [email protected].

The following steps are required to install Pangeos:

1. Load the .NET Framework by going to http://windowsupdate.microsoft.com, where you can install the .NET Framework version 1.1 redistributable package, plus any available language and service packs.

More information on this Windows Update can be found at http://www.microsoft.com/net/. Microsoft® .NET is a set of software technologies for connecting information, people, systems, and devices. This new generation of technology is based on Web services—small building-block applications that can connect to each other as well as to other, larger applications.

The Microsoft .NET Framework 1.1 (latest as of September 2003) includes everything you need to run applications built using the .NET Framework 1.1, including the common language runtime (CLR) and class libraries.

2. Load the Pangeos application from a distribution CD or from the website. Follow the installation instructions and license agreement.

3. Load Ghostscript, which will be used to convert PostScript graphics to more easily used formats such as JPEG. This can be loaded from the distribution CD or from the web; the standard location is http://www.cs.wisc.edu/~ghost/. It will be necessary to load GSview if you want to keep the graphics in PostScript format.

4. Connect the dongle to enable the Pangeos application. The dongle is checked when Pangeos is launched and from time to time during program execution. You may need to install an update.


The most common problems are (1) inadequate permissions – log in with Administrator privileges, (2) old libraries and settings – go through all of the Microsoft® updates, (3) no USB support on certain versions of NT – get a new computer ☺

Once you have successfully launched Pangeos, go to Tools/Options and specify how you want to view your graphics and which external editor you want to use to open text files.

1.2 Principles and Software Layout

Pangeos is designed for the knowledgeable user who wants to do advanced geostatistics without being concerned with hundreds of data and parameter files. Pangeos is also suitable for the newcomer who wants to learn about some advanced aspects of geostatistical simulation.

Project File Structure

All files for a project are stored underneath a directory that is specified when a project starts. About 20 directories are created under the project directory. Data files are in an internal format and are gzipped by Pangeos to save space. The gzip format is much more efficient than ASCII GSLIB-like files or binary files. Each realization and variable is kept separate, making it easy for Pangeos to access the requested data.

There is no reason for the user to view or change any files in the project directory. In fact, we highly recommend that you leave those files alone. The only exception is the Private folder, which will not be modified by Pangeos.

The files in the Temp directory under the project directory are just that – temporary. They can be deleted if they are taking up too much space. These files are deleted periodically and when Pangeos loads the project. They can also be deleted through the Tools/Options/Memory pulldown.

The data files do not need to be in the project directory. Once the data are loaded, there is no reference to the external file the data came from. It would be a good idea, however, to set up a "rawdata" directory at the same level as the Pangeos project directory to archive the input data.

Project files can be copied or moved on a particular PC and between PCs. The entire project can be zipped and sent by e-mail or FTP. The Open Project selection will open a browse-for-folder window; see below.

The file structure has evolved slightly over the development of Pangeos. It is possible that certain parameter and data files from old beta-version projects cannot be loaded. Please contact [email protected] with any questions regarding old data and conversion programs that we have available.


Basic Project Operation

The user loads drillhole data, polygons, rock type models, and other data into Pangeos. The rudimentary visualization and statistical tools are used to verify the data and to help choose modeling parameters. Geostatistical and optimization tools are used and the output is exported to GMP software or the Microsoft® Office tools.

As illustrated above, the user can set up multiple projects. There are four in the screen capture above: Bill, Red, Testing and John. There is no flexibility to copy parameters and data between projects; contact [email protected] for some tips and shortcuts. Parameter definitions, data objects and object definitions can be deleted within a project by selecting them in the Manage Project window, which opens from the button under the Project Management option in Project.

There are three tabs on the left-hand side: Data, Modules and View. Project management, object definitions and data import/export options are under Data. Data Analysis, Preprocessing, Variogram Analysis, Geostatistical Modeling, Postprocessing, Grade Control and Tools are the main tools under Modules. There are multiple choices under each tool. The View tab launches a window for 2-D dynamic visualization of data, surfaces, models and polygons.

There may be multiple data and/or parameter sets associated with each data object or tool. There are three data sets loaded in the example below. The information in the main panel (lower right) relates to the data object (or parameter set) that is highlighted.


The actions available to the user are located in the central bar between the list of data objects (parameter sets) and the information window. The specific actions available depend on the context. In the case above, the options are to import new data, save (export) the data, view the data in an external editor, delete the data, or see information about the data and its history. Other common options are: run parameters, create new parameters, import parameters, accept changes, revert to saved parameters, create a copy, delete parameters, check parameters, and zip files. A special option is fill with smart parameters; although not fully implemented, it generates some default parameter values based on the selected input data, for example the name and description of the parameters, output file names, minimum and maximum values, or the maximum search distance.

View

The View tab launches a window for 2-D dynamic visualization of data, surfaces, models and polygons. Three orthogonal orientations and a slice number and tolerance can be chosen.

Different colormaps can be chosen and edited. The minimum and maximum of the data can be used as the minimum and maximum for display, or specific values can be input. Colormaps can be displayed inside the visualization window by clicking the corresponding button.


Some basic display options include vertical exaggeration, zoom in, setting the background to white, and showing gridlines. Images can be copied to the clipboard, printed, or saved as a snapshot.

The variable value and coordinates corresponding to the cursor location appear at the bottom of the screen.

Polygon data objects can be created in this interface. The Z coordinate (X or Y for YZ and XZ orientations) can be specified, as well as the name of the polygons. Some editing of the polygons while they are being created is possible by clicking the Remove last point button in the Polygon Editor window; see below.

Work Flow

The workflow manager under Modules/Tools permits program operations to be queued up and executed in batch. The data are passed between the different tools very straightforwardly since they are referred to by name; there are no external data files used as inputs or outputs. The sequence of steps on the left-hand side runs approximately from top to bottom. Of course, any specific project will not require all of the steps. We propose some common workflows in Section 7, but prior geostatistical experience will almost certainly be necessary to run Pangeos effectively.

Graphics

The classic PostScript graphics of GSLIB are still used in Pangeos. Histograms, probability plots, scatterplots, Q-Q plots, variogram plots, and 2-D pixel plots have been adapted to Pangeos.


These graphics are generated in a new Graphic tab within each tool. The image can be copied with Ctrl-C or saved in any standard graphics format for editing under the File option.

The specification of a PostScript viewer that will launch when you view a graphic is set in Tools/Options/Plotting:

1.3 Data Objects

Data in generic ASCII flat files with a GSLIB-like header are imported into the software; see below. There is the standard GSLIB header with a title line, the number of variables, and a one-line description for each variable. The data that follow should be space or comma delimited with no blanks. Missing values should be flagged with a very high or very low number. There should be no blank lines at the end of the data file.
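As an illustration of this layout, a minimal hypothetical drillhole file (all names and values are invented for this example) would have a title line, the number of variables, one description line per variable, and then the delimited data lines:

Example drillhole data
6
East
North
Elevation
Hole ID
Gold g/t
Copper %
1200.5, 4310.0, 155.2, 1, 0.45, 0.12
1205.5, 4310.0, 152.7, 1, 1.30, 0.25
1350.0, 4402.5, 148.1, 2, 0.08, 0.05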

At present, gridded rock type or grade models can only be input in the GSLIB format, that is, a file where the value(s) for each grid node are on one line. The order of the file is X cycling fastest, then Y, then Z. The first entry in the file is the lower left corner. The last entry is the upper right corner. The grid must be defined before importing gridded data.
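The ordering convention can be sketched with a few lines of Python; the grid dimensions below are hypothetical and the helper functions are for illustration only, not part of Pangeos.

# GSLIB ordering: X cycles fastest, then Y, then Z; the first record is the
# lower left corner of the grid. nx, ny, nz are hypothetical dimensions.
nx, ny, nz = 100, 80, 20

def record_to_ijk(record):
    """Convert a 0-based record (line) number to 0-based (ix, iy, iz) indices."""
    iz, rest = divmod(record, nx * ny)
    iy, ix = divmod(rest, nx)
    return ix, iy, iz

def ijk_to_record(ix, iy, iz):
    """Convert 0-based grid indices back to the 0-based record number."""
    return ix + iy * nx + iz * nx * ny

print(record_to_ijk(0))                 # (0, 0, 0)    -> lower left corner
print(record_to_ijk(nx * ny * nz - 1))  # (99, 79, 19) -> upper right corner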

Data Objects

There are a number of data objects in Pangeos. Any of the following data types can be imported into Pangeos. Of course, Pangeos will add new data entries under each type and add to the existing data entries. These data entries can be exported at any time to GSLIB-like data files or comma separated values (csv).


1. Drillholes: any scattered data from drillholes, blastholes, bulk sampling or any other procedure are imported as drillhole data. When a GSLIB-like data file is selected, a dialogue like the following is displayed:

The drillhole data name must be unique. A description can also be given. The data are referred to by name in subsequent tools. Pangeos keeps the X, Y, Z coordinates and the Hole ID as special fields. The user chooses the continuous or categorical variables they want to import. Trimming limits can be set independently for each variable; the default limits are -1.0x10^21 and 1.0x10^21. Values below the lower trimming limit, as well as values above the upper trimming limit, are set to the missing value. The data name, description, and variable names are taken from the file, but can all be edited while or after the data are imported under the Edit tab.
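The effect of the trimming limits can be sketched as follows; the values are invented and Pangeos applies this internally, so the snippet is only an illustration.

# Values outside the trimming limits are flagged as missing.
tmin, tmax = -1.0e21, 1.0e21        # default trimming limits
MISSING = float("nan")              # stand-in for the missing-value flag

def trim(value, lower=tmin, upper=tmax):
    """Return the value if it lies within the limits, otherwise the missing flag."""
    return value if lower <= value <= upper else MISSING

grades = [0.45, 1.30, -999.0, 2.0e21]   # invented example values
print([trim(g) for g in grades])        # only 2.0e21 exceeds the upper limit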

Optionally, a default grid taking into account the extent of the drillhole data can be created automatically – required for visualization. This grid can be adjusted later.

Multiple drillhole data sets from different files can be imported, with these constraints:

• The drillhole data name will be taken from the title line. The data names have to be unique: they cannot be used by another drillhole data set already loaded, and cannot be identical between two files.

• All files must have the same number of variables

• All files must have identical variable names, ordered identically

A histogram or scatter plot can be created for the imported data as a check and/or preliminary analysis. The settings for these plots can be exported as a parameter file that can be loaded in the corresponding tool of the Data Analysis module.

2. Polygons: outlines of rock types, dig limits and stope boundaries are all imported as polygons. A polygon has an orientation and location. A polygon can be digitized clockwise or counter-clockwise. Multiple polygons can be imported from a single file; in that case, an ID variable must be specified and each polygon must have a different ID. Naming will be done according to a prefix. Another variable is used to specify whether the polygon is closed or open, with 1 or 0, respectively.


Polygons can be imported from a data file or the clipboard. It can be useful to get the data points from the clipboard if programs like digxy™ are used. Polygon data objects can also be created in the View interface.

3. Surfaces: 2-D surface data that corresponds to a specified 2-D grid can be imported. The grid must be specified before the data can be imported and the data file should have the correct number of lines. There are too many data lines in the following example – a message is reported in the window.

Surfaces must be loaded one at a time. Users are allowed to import multiple 2-D models, but not multiple surfaces.

4. Models/Trends: 1-D, 2-D, and/or 3-D gridded models of continuous or categorical variables can be imported. 1-D, 2-D, and 3-D grids are kept as separate data objects. Surfaces can be imported as surfaces or as 2-D grids. 2-D grade models may be imported as 2-D models, or considered as one-layer 3-D grids for volume calculations or if the values are going to be regridded to more layers.

A grid must be defined before importing gridded data. The grid will no longer be editable after a gridded file has been linked to it.

As mentioned above, the gridded values must be imported in the GSLIB ordering convention: cycling X fastest, then Y, then Z. Multiple continuous and/or categorical variables can be imported at the same time – multiple variables in the GSLIB-like data file. Multiple realizations can also be read in at the same time. All of the realizations in a data file are read in at once. The user cannot specify a subset. Of course, you can edit the data file and delete those realizations and/or models that you do not want to import.


Gridded models can be exported with X, Y, and Z coordinates or an i, j, k grid index to make importing into GMP software easier. The realizations to be exported can be selected, and they can be written to separate files.

5. Probability Distributions: non-parametric data distributions are read in as data values plus weights. The weights represent the relative probability to give to each datum; the weights need not add up to one – they will be standardized. Parametric distributions are not imported or exported at this time.
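A minimal sketch of this standardization, with invented numbers:

# Standardize relative weights so they sum to one before use as probabilities.
values  = [0.3, 0.9, 1.4, 2.2]      # hypothetical data values
weights = [2.0, 1.0, 1.0, 4.0]      # relative weights; need not sum to one

total = sum(weights)
probabilities = [w / total for w in weights]
print(probabilities)                # [0.25, 0.125, 0.125, 0.5], sums to 1.0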

6. Experimental Variograms: are calculated within Pangeos and can be exported for plotting or fitting in some external software. Pangeos will not import experimental variograms.

7. General: data that do not fall under one of the categories specified above are considered a general data object. The bivariate cell size versus declustered mean relationship coming out of cell declustering, ranking reports, resource reports, source of uncertainty runs, averages within domains, and so on are all considered general data objects. General data can be exported, but not imported.


8. Transformation Tables: are created within Pangeos each time original grade values are transformed to normal scores or to any other distribution. Transformation tables contain pairs of z and y values, with z being the original data values or class bound values and y being the corresponding normal score values. Processes like normal score transformation, stepwise conditional transformation and sequential Gaussian simulation generate transformation tables. They are required to back-transform normal scores to original units. They can be exported, but not imported.
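The idea behind a z-y transformation table can be sketched in Python as follows; this is a simplified illustration with equal weighting and no despiking, not the exact Pangeos implementation.

# Build a simple normal-score transformation table: pair each sorted data
# value z with the standard normal quantile y of its cumulative probability.
from statistics import NormalDist

z_values = sorted([0.9, 0.2, 3.1, 0.5, 1.4])   # hypothetical grades
n = len(z_values)
table = []
for rank, z in enumerate(z_values, start=1):
    p = (rank - 0.5) / n                       # cumulative probability
    y = NormalDist().inv_cdf(p)                # corresponding normal score
    table.append((z, y))

for z, y in table:                             # back-transformation interpolates
    print(f"z = {z:4.1f}  ->  y = {y:6.3f}")   # z from y using the same pairs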

1.4 Internal Objects in Pangeos

There are a number of other objects in Pangeos. The user can specify any number of entries under each object. These objects cannot be imported or exported; they are for internal use only at this time. Of course, the actual parameters can be captured with an editor and printed or pasted into another application.

1. Variogram Models: variograms are all specified by name in the geostatistical calculation tools. Variograms can be entered directly by the user or they are added as the variogram fitting tool is used.

The nugget effect and variance contribution, type, ranges and angles for each structure define a variogram model. Variogram models can be interactively modified based on the experimental points under the “Edit” option.

2. Grid Specifications: the parameters that define 3-D, 2-D, and 1-D grids are internal Pangeos objects. The following is an example of a 3-D grid specification:


Grids in Pangeos are specified just as in GSLIB: the grid in a particular direction is defined by the center of the first block (not the model edge), the number of grid blocks, and the block size. The model minimum, maximum, and size are shown to the right. Alternatively, the total number of blocks can be specified instead of the block size, or the minimum and maximum instead of the start.
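The arithmetic implied by this convention can be sketched as follows, with hypothetical numbers:

# GSLIB-style grid definition in one direction: xmn is the center of the
# first block, nx the number of blocks, xsiz the block size.
xmn, nx, xsiz = 102.5, 100, 5.0       # hypothetical values

model_min = xmn - xsiz / 2.0          # edge of the first block
model_max = xmn + (nx - 0.5) * xsiz   # edge of the last block
model_size = nx * xsiz

print(model_min, model_max, model_size)   # 100.0 600.0 500.0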

The “Compute data coverage” option will scan through all of the drillhole data entries and compute the minimum and maximum range for X, Y, and Z. This can be a useful option if setting up a quick grid for exploratory data analysis.

Two new grid creation options are available at the bottom of the 3-D grid specification: the corresponding 2-D and 1-D grids can be generated. Trend models and surfaces can relate to these grids.

3. Categories: multiple sets of categories can be defined. In general there will be multiple rock type definitions, ore/waste codes, and perhaps measured/indicated/inferred codes:

Each rock type is specified by a name, an integer code, and a related color. These may be changed at any time. The Update button must be pressed when a change is made to a particular category; changes will be lost if another rock type is selected or another rock type is added.

4. Color Maps: multiple color maps can be specified for displaying continuous grade (or categorical values if there are many of them) values.


The user adds as many color ranges as they like. The options are: Continuous, Continuous linked to values, Discrete, Discrete linked to values and Categories. The Continuous and Discrete types use the minimum and maximum (or the categorical values) of the variable selected for display. For the types linked to values, the grade range is divided at a set of minimum values given by the user; otherwise, the range is divided equally by the chosen number of colors. For the continuous types, the color is extrapolated as shown. The Discrete linked to values option is useful to highlight a set of cutoff grades. The Categories type can be created from a category definition.

1.5 Tutorial One: Project Setup

The purpose of this exercise is to lead users through loading some basic data and setting up Pangeos for geostatistical operations. As described above in Section 1.1, the .NET framework and Pangeos must be installed on your computer. A valid license must be set up.

1. Check that a text editor and the Ghostscript program have been specified under Tools/Options. The TextPad editor from www.textpad.com is a good, easy-to-use editor that can handle large files. As described above in Section 1.1, Ghostscript is available on the installation CD or from the web.

Pangeos will prompt you to save from time to time. This is pretty standard for complex applications where you could lose your work. The save reminder interval is set under Tools/Options/Project; 20-30 minutes seems reasonable. You can save the project manually at any time by pressing Ctrl-S or selecting Project/Save.

2. Go to Data/Drillholes and click the button to load drillhole data. Find the bill-ddh.dat file on the installation CD or from www.statios.com and select that file. Load RT as a categorical variable and the gold and copper grades as continuous variables. There are no missing values, so you do not have to explicitly set trimming limits.

Select the drillhole dataset in the upper right list and check that you have loaded 6648 data with gold grades from 0.01 to 29.72 g/t and copper grades from 0.01 to 3.18%. This information will appear in the main lower right panel when the drillhole data set is selected.

3. Go back to Objects/3D Grid Specifications and select the default grid that was created by Pangeos. Note that the default 100 by 100 by 20 grid has rather odd grid sizes and origin values. Click on Compute data coverage to see the limits of the data. Reset the grid to the following since it was used for the rock type model.

Create the corresponding 2-D and 1-D grid definitions for later block averaging and trend modeling. It is convenient to create a 3-D grid for 10 m blocks with 176400 cells – that will be more practical for fast SGS in the corresponding tutorial.

4. Go to Objects/Categories and specify rock type one as code 20 with a color of your choice and rock type two as code 21 with a different color. An example input panel is shown in the preceding Section 1.4.

5. Go to Data/Models/Trends/3D and load the rock type model in bill-rtmodel.dat. Choose the rock type as a categorical model and make sure you are using the standard 5 m grid definition defined above. Visualize the model in the View module to ensure that it was successfully loaded. The following is XZ slice 76. Note that different color scales were selected for the drillhole and rock type model to highlight any mismatches.


6. Go to Plotting/Histogram Plot and create a histogram of the gold grades in the drillhole data. You should have 6648 data with an average of 0.728 g/t.

7. Go to Plotting/2-D Slice and plot an XY slice through the scattered data. The parameters and output should appear as follows:

You have successfully launched a Pangeos project if you have completed the steps laid out above.


2 Data Analysis Tools

There are a minimum number of essential data analysis tools in Pangeos. These tools are required to ensure that the data have been read in correctly and to check the various operations that you will undertake in Pangeos. You will be required to visualize and understand your data in a general mine planning (GMP) package and, perhaps, more sophisticated statistical analysis software.

None of these tools create an output that is required by subsequent tools within Pangeos. These tools are intended to be general to permit a variety of data objects to be considered. For example, the experimental variogram data object can be entered into the scatterplot program.

2.1 Histograms

This is an adaptation of histplt from GSLIB. A histogram and univariate statistical summaries are created for a PostScript display device. The output is automatically reformatted for display in common bitmap formats such as JPEG.

• Virtually any data set (drillhole, gridded model, …) can be selected for histogram calculation with a continuous or categorical variable. The rock type must also be present if you are filtering by rock type.

• The Weight can be a declustering weight, specific gravity, composite length, or thickness for 2-D data. These weights do not need to add up to one; they will be restandardized.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program does not loop over the chosen rock types; the histogram is created with one set of data.

• The arithmetic/logarithmic scaling and number of classes are as expected. You can set the minimum and maximum from the data. Clicking the little arrow at the right of the Minimum and Maximum entry boxes will get the limits from the data.


A preview of the histogram is available under the Graphic 1 tab. The number to the right of the show button is the DPI resolution of the display. The show button will display the graphic in a new window. The figure can be captured with Ctrl-C or through the menu.

2.2 Probability Plots

This is an adaptation of probplt from GSLIB. A normal or lognormal probability plot is created for a PostScript display device. The output is automatically reformatted for display in common bitmap formats such as JPEG.

• Virtually any data set (drillhole, gridded model, …) can be selected for the probability plot. The rock type must also be present if you are filtering by rock type. Clicking the Input data link opens a Data Picker window listing all the available data of the different types that can be used.

• The Weight can be a declustering weight, specific gravity, composite length, or thickness for 2-D data. These weights do not need to add up to one; they will be restandardized.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program does not loop over the chosen rock types; the probability plot is created with one set of data.


• The arithmetic/logarithmic scaling is as expected. Just like in GSLIB, the number of labels must be specified by the user; it is not calculated automatically.

• Clicking the little arrow at the right of the Minimum and Maximum entry boxes will get the limits from the data.

A preview of the probability plot is available under the Graphic 1 tab. The number to the right of the show button is the DPI resolution of the display. The show button will display the graphic in a new window. The figure can be captured with Ctrl-C or through the menu.

2.3 Scatterplots

This is an adaptation of scatplt from GSLIB. A scatterplot and basic bivariate statistical summaries are created for a PostScript display device. The output is automatically reformatted for display in common bitmap formats such as JPEG.

• Virtually any data set (drillhole, gridded model, …) can be selected for the scatter plot. The two variables have to be in the same data set. The Extract From Grid tool from Postprocessing could be used to get the values from a grid that match drillhole locations. The rock type must also be present if you are filtering by rock type.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program does not loop over the chosen rock types; the scatterplot is created with one set of data.

• The arithmetic/logarithmic scaling and limits are as expected. You can set the minimum and maximum from the data. Clicking the little arrow at the right of the Minimum and Maximum entry boxes will get the limits from the data.

• The bullet size is also self-explanatory. The statistics shown to the right of the scatterplot are optional; you can choose between complete statistics (total number of data, number of plotted data, mean and standard deviation for both variables, and the correlation coefficient or the rank correlation coefficient), just the correlation coefficient, or nothing.

A preview of the scatterplot is available under the Graphic 1 tab. The number to the right of the show button is the DPI resolution of the display. The show button will display the graphic in a new window. The figure can be captured with Ctrl-C or through the menu.


2.4 Q-Q Plots

This is an adaptation of qpplt from GSLIB. A Q-Q plot is created for a PostScript display device. The output is automatically reformatted for display in common bitmap formats such as JPEG.

• Two data objects must be selected for the Q-Q plot. Implicitly, the two variables should have the same basic units. For example, the grades in the drillhole dataset versus the same grade variable from a simulated realization, or the same grade from two different data sources.

• The Primary variable appears on the X axis and the Secondary variable appears on the Y axis.

• Virtually any data set (drillhole, gridded model, …) can be selected for each variable in the Q-Q plot. The rock type must also be present if you are filtering by rock type.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program does not loop over the chosen rock types; the Q-Q is created with one set of data.

• The Weight for each variable will be used to construct the CDF from which the quantiles will be read. The weights could be declustering weights, specific gravity, composite length, or thickness for 2-D data. These weights do not need to add up to one; they will be restandardized.

• You can set the minimum and maximum from the data or manually for each variable. Clicking the little arrow at the right of the Minimum and Maximum entry boxes will get the limits from the data. You really ought to set the scales the same, but the program does not require you to.


• The arithmetic/logarithmic scaling is as expected. You are required to have both scales arithmetic or both scales logarithmic.

A preview of the Q-Q plot is available under the Graphic 1 tab. The number to the right of the show button is the DPI resolution of the display. The show button will display the graphic in a new window. The figure can be captured with Ctrl-C or through the menu.

2.5 2-D Slice Maps

This is an adaptation of pixelplt and locmap from GSLIB that will plot a slice from a 3-D model, drillhole data within some tolerance, and polygons. A 2-D slice is created for a PostScript display device. The output is automatically reformatted for display in common bitmap formats such as JPEG.

• Any subset of gridded model, drillhole, and polygon data may be used, but a grid must be defined. The grid to the right of the Display gridded data option is used when you are not displaying a grid. The grid definition is taken from the grid itself when you are displaying a grid.

• Choosing to display gridded data requires you to choose a gridded data object, a realization number, and a variable. The realization number for gridded data is specified just to the right of the grid, and appears for gridded files with multiple realizations.

• The grid orientation and slice number are required for all slices. The GSLIB conventions are used for the slice numbering, that is, from the lower bottom left.

• Choosing to display scattered data requires you to choose a drillhole data object, a tolerance for picking up data, a bullet size, and a toggle of whether or not to display the circle around the data values. The tolerance is used to limit the drillhole data that are displayed on the plot. The units are in distance units from the center of the slice. A bullet size of 1 is a nominal readable size.

• Choosing to display polygons requires you to choose the polygons that you want to display and the line thickness. A line thickness of 1 is a nominal readable thickness.


• The arithmetic/logarithmic and color/gray scale choices are self-explanatory. The same scale applies to both the gridded data and the drillhole data: you cannot display a continuous grade from the drillhole data on top of a gridded categorical rock type model (which would actually be a nice thing to do). If a categorical variable is selected, you must have the categories defined as a data object and you must choose the rock types to consider at the bottom of the Options section.

• A Y/X exaggeration factor can be specified. The factor is multiplicative for the Y-axis of the plot.

• You can set the minimum and maximum from the data. Clicking the little arrow at the right of the Minimum and Maximum entry boxes will get the limits from the data.

A preview of the slice is available under the Graphic 1 tab. The number to the right of the show button is the DPI resolution of the display. The show button will display the graphic in a new window. The figure can be captured with Ctrl-C or through the menu.

2.6 Tutorial Two: Data Analysis

The purpose of this exercise is to lead users through some basic exploratory statistics steps with Pangeos. The software must be installed on your computer with a valid license.

1. Go to Data/Drillholes and load red.dat from the installation CD or www.statios.com.

Notice how the Zinc grade is picked up as the Z coordinate. Pangeos tries to identify the correct input variables, but you must double check. All variables are continuous except for the Rock type, which is categorical. There are no missing values, so you do not have to explicitly set trimming limits. The following settings are correct (note the Z coordinate is the elevation):


2. Go to Objects/Categories and create a new parameter set with category codes 20 and 21, where 20 indicates that the vein is present and 21 indicates that it is missing. In this example the colors were set to red where the vein is present and green where it is absent.

3. Go to Objects/2D Grid Specifications and edit the grid parameters to be nice round numbers. This is not really necessary, but it is a good habit.


4. Go to Modules/Data Analysis/2-D Slice and create a map of the drillhole data locations.

5. Create a histogram of the vein thickness when the vein is present.


6. Create a probability plot of the vein thickness when the vein is present – use the same scale as the histogram.

7. Create a set of scatterplots between the grades and thickness. There are N(N-1)/2 scatterplots. This makes a total of 10 since there is thickness plus four grades (N=5).


3 Preprocessing Tools

There are a number of preprocessing tools in Pangeos. These tools rarely create final results, but the steps are required for subsequent simulation. Many of these preprocessing tools are not required for conventional kriging-based geostatistics.

The normal scores and stepwise conditional transformations are required to transform original data to the standard normal distribution required by Gaussian simulation. Despiking is useful to break the ties in datasets prior to these normal transformation schemes. Declustering methods aim at calculating representative distributions prior to simulation. Trend modeling helps to remove large-scale features that are not well reproduced by simulation.

3.1 Declustering

Data are rarely collected with the goal of statistical representativeness. Additional drillholes are drilled in high-grade areas. This should not be changed; it leads to the best economics and the greatest number of data in the areas of the deposit that contribute the greatest metal and economic importance. There is a need, however, to adjust the histograms and summary statistics to be representative of the entire volume of interest.

Most contouring or mapping algorithms automatically correct this preferential clustering. Closely spaced data inform fewer grid nodes and, hence, receive lesser weight. Widely spaced data inform more grid nodes and, hence, receive greater weight. A map constructed by ordinary kriging is effectively declustered.

Even though modern stochastic simulation algorithms are built on the mapping algorithm of kriging, they do not correct for the impact of clustered data on the target histogram; these algorithms require a distribution model (histogram) that is representative of the entire volume being modeled. Simulation in an area with sparse data relies on the global distribution which must be representative of all areas being simulated.

Declustering techniques assign each datum a weight based on its closeness to surrounding data. Then, the histogram and summary statistics are calculated with these declustering weights. Four declustering methods are available in Pangeos:

1. Cell declustering is the most conventional declustering method that is robust in presence of a reasonably large number of data and no important geological boundaries.

2. Soft data declustering is particularly useful when there are few data, yet a secondary geological variable can be mapped and used as a trend or a soft data variable for declustering.

3. Nearest neighbor declustering is suited to cases where there are multiple rock types that are accurately modeled.

4. Declustering by kriging works in the same case as nearest neighbor declustering, but the variogram provides improved control on anisotropy within the rock types at the expense of some artifacts in the kriging weights. Declustering by kriging is available in the kriging program under Option.

The first three declustering methods are available in preprocessing.


Cell Declustering

Cell declustering is a commonly used declustering technique. It works as follows:

1. Divide the volume of interest into a grid of cells,

2. Count the occupied cells and the number of data in each occupied cell, and

3. Weight each datum inversely proportionally to the number of data falling in the same cell.

The weights are often standardized so that they sum up to the number of input data. This makes them easy to understand: a weight greater than 1.0 implies that the sample is in a sparsely sampled area and is being given more weight than average; a weight less than 1.0 implies that the sample is clustered with other samples and is being given less weight than average.
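A simplified sketch of these three steps in Python, for a single cell size with no grid-origin offsets or anisotropy (the actual declus implementation is more complete):

# Cell declustering: weight each datum by 1/(number of data in its cell),
# then standardize so the weights sum to the number of data.
from collections import Counter

def cell_declustering_weights(points, cell_size):
    """points: list of (x, y, z) tuples; cell_size: cell side length."""
    cells = [tuple(int(c // cell_size) for c in p) for p in points]
    counts = Counter(cells)
    raw = [1.0 / counts[cell] for cell in cells]
    factor = len(points) / sum(raw)       # standardize: weights sum to n
    return [w * factor for w in raw]

# Two clustered data share a cell and receive less weight than the lone datum.
data = [(1.0, 1.0, 0.0), (2.0, 1.5, 0.0), (40.0, 45.0, 0.0)]
print(cell_declustering_weights(data, cell_size=10.0))   # [0.75, 0.75, 1.5]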

The program in Pangeos is an adaptation of declus from GSLIB. Weights are assigned to each datum based on the density of data: data in densely drilled areas are given less weight and data in sparsely drilled areas are given more weight.

• A drillhole dataset is required with a continuous or categorical variable. The rock type must also be present if you are filtering by rock type.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program does not loop over the chosen rock types; declustering is performed with one set of data, so you may have to run the program several times. Of course, declustering is a geometric consideration and it is often a good idea to decluster with all of the data rather than filtering by rock type. Similarly, one set of declustering weights can be used for multiple variables that have been equally sampled; different declustering weights should be calculated when there is unequal sampling.

• You can run the program for a number of different cell sizes between a minimum and maximum size. The cell size should be about the drillhole spacing in sparsely drilled areas.

• Only one set of declustering weights is chosen. There are three options: keep the cell size with the minimum declustered mean, keep the cell size with the maximum declustered mean, or override these automatic selections by specifying a cell size of your choice. Choose a cell size so that there is approximately one datum per cell in the sparsely sampled areas.

• The anisotropy factors scale the reference cell size given in the X direction. The factors are multiplicative, that is, an anisotropy factor less than 1 will reduce the cell size in the specified Y or Z direction. This anisotropy should be chosen based on the anisotropy of the data configuration, for example areal drillhole spacing versus sample intervals; it is not necessarily linked to the geologic anisotropy.

The declustering weights are added to the drillhole dataset (the variable prefix can be specified by the user) and the summary results (cell size versus declustered mean) are added as a general data type. The associated graphics include the cross plot of the declustered mean versus cell size and the histograms with and without weights.


In the previous example, cell declustering was run for 50 cell sizes ranging from 10 to 400 meters. The selected size should probably be between 100 and 150 meters, where the declustered mean is at its minimum.

Soft Data Declustering

Conventional cell declustering is inadequate for determining a representative distribution unless there is adequate data coverage in both high-grade and low-grade areas. In such situations, if there is a secondary variable with better coverage, such as remote sensing or geological mapping, together with a calibration relationship between the secondary variable and the grade under consideration, declustering can be performed based on this secondary data.

• A drillhole dataset is required with a continuous or categorical variable.

• A calibration relationship is required. You must specify the primary variable (the one you are really interested in) and the secondary variable (the one that is also gridded everywhere and considered representative).

• A gridded data object or a general distribution (transformation table) is required. The distribution of the secondary variable in this file is considered representative. You must select this grid and the variable in it.

• The program works by calculating conditional distributions of the primary for a series of secondary data classes. You must also specify the number of secondary data classes to consider. There should be a minimum of 20 data per class.

The representative distribution is added as a Probability Distribution data object. This can be used in simulation and other subsequent calculations.
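The following sketch shows one way the calibration idea can be expressed: each datum is weighted by the ratio of the representative frequency of its secondary-data class to the frequency of that class among the calibration pairs. The function and argument names are hypothetical, and the actual Pangeos implementation, which assembles the conditional distributions of the primary variable explicitly, may differ in detail.

import numpy as np

def soft_decluster_weights(secondary_at_data, secondary_grid, n_classes=10):
    # Class limits are taken from the calibration pairs (quantile classes).
    edges = np.quantile(secondary_at_data, np.linspace(0.0, 1.0, n_classes + 1))
    cls_data = np.clip(np.digitize(secondary_at_data, edges[1:-1]), 0, n_classes - 1)
    cls_grid = np.clip(np.digitize(secondary_grid, edges[1:-1]), 0, n_classes - 1)
    p_rep = np.bincount(cls_grid, minlength=n_classes) / len(cls_grid)   # representative
    p_dat = np.bincount(cls_data, minlength=n_classes) / len(cls_data)   # in the pairs
    w = p_rep[cls_data] / p_dat[cls_data]     # up-weight under-represented classes
    return w * len(w) / w.sum()               # standardized declustering weights

The representative distribution is then the histogram of the primary data weighted by these values.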

The following is an example of soft data declustering using a smooth kriging map of thickness as the secondary variable. The trend values at the drillhole locations were obtained through the Extract From Grid option in the Postprocessing module.


Nearest Neighbor Declustering

The most intuitive approach to declustering is to base the weights on the volume of influence of each sample. This procedure assigns weights based on how many grid cells (within the right rock type) are close to each data value. A data value in a sparsely sampled area will inform more cells and will receive more weight. This is equivalent to kriging with a maximum of one sample: the weight of each sample is the number of nearby grid cells assigned to it within a search radius.
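A minimal sketch of this cell-allocation logic is shown below, assuming a flattened list of grid-cell centers and rock-type codes; the function and argument names are hypothetical.

import numpy as np

def nn_decluster_weights(data_xyz, data_rt, cell_xyz, cell_rt, radius):
    # Each grid cell is allocated to its closest datum of the same rock type,
    # provided that datum is within the search radius; a datum's weight is the
    # number of cells it informs.
    w = np.zeros(len(data_xyz))
    for c_xyz, c_rt in zip(cell_xyz, cell_rt):
        same = np.where(data_rt == c_rt)[0]
        if same.size == 0:
            continue
        d = np.linalg.norm(data_xyz[same] - c_xyz, axis=1)
        if d.min() <= radius:
            w[same[np.argmin(d)]] += 1.0
    return w   # subsequent programs standardize the weights as required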

• A drillhole dataset is required with a continuous or categorical variable. The rock type must also be present if you are filtering by rock type, which you should be if you are running this program.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program considers all of the rock types and only assigns cells to data when they are of the same rock type.

• You should have a gridded rock type model that will be used to allocate the grid cells to data.

• The grid cells will be allocated to data only when the center of the grid cell is within a specified search distance of the data location. This search distance should be rather large and is used to control areas of no drilling. This distance does not consider any anisotropy.

This program accounts for the geologic model where cell declustering does not. There are some features of this algorithm that should be considered. First, if the data are close together (along strings), some data can receive no weight and others arguably receive too much. Second, the end data on strings of data can also receive a too-large weight. Finally, no anisotropy or variogram measure of continuity is used in the calculation of the weights.

The declustering weights are added to the drillhole dataset (the variable prefix can be specified by the user). Subsequent programs will standardize the weights as required.

The kriging program, through the declustering option, can be set up to calculate similar weights by accumulating the kriging weights that come from estimating a grid. The weights from kriging account for the variogram and more complex search considerations. The string effect of kriging is a minor problem.

3.2 Despiking and Transformation The data could be despiked and transformed within the simulation algorithm, but it is useful to transform them ahead of time for exploratory statistics and variogram calculation.

Despiking

Despiking is recommended before Gaussian transformation (either normal scores transformation or the stepwise conditional transformation) when there are data with the same value (zeros, detection limit, or due to significant digits). For example, there would be no unique transformation to any target distribution if there are 10% original zero values. There is a need to break the ties prior to transformation.

The simplest approach is to break the ties randomly. Often, the ordering of ties is left to the sorting algorithm or the order the values are encountered in the data file. This method is used within the simulation program if the despiking program is not run and the option Transform the Data is selected. It should be noted that there is no need for despiking with indicator-based methods.

More sophisticated random despiking would consist of adding a small random number to each tie and then ordering from smallest to largest. In either case, random despiking could have an effect on subsequent modeling. In particular, the variogram calculated in the transformed data space could show a too-high nugget effect and unrealistic short-scale variations. This more elaborate despiking is particularly important in the multivariate stepwise transformation; it is of little practical importance in the single variable normal scores transformation case.

The procedure proposed by Verly in 1984 is used in Pangeos. The idea is to compute local averages within local neighborhoods centered at each tied data value. The data are then ordered or despiked according to the local averages; high local averages rank higher. The user must choose the size of the window. Clearly, if the window is too large or too small then no despiking will be done. In general the window around each data should include at least three other data, otherwise the ties will not be broken. There is little benefit to study the window size in great detail; the despiking results are relatively robust with respect to the window size.
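A minimal sketch of this local-average despiking is given below, assuming 3-D sample coordinates in a NumPy array; the names and the size of the small increment are hypothetical.

import numpy as np

def despike_local_average(values, xyz, radius, eps=1e-6):
    # Rank tied values by the average grade within an isotropic window and
    # add a tiny, rank-ordered increment so ties are broken consistently.
    out = values.astype(float).copy()
    for v in np.unique(values):
        tied = np.where(values == v)[0]
        if tied.size < 2:
            continue
        local = [values[np.linalg.norm(xyz - xyz[i], axis=1) <= radius].mean()
                 for i in tied]
        order = np.argsort(local)            # lowest local average ranks lowest
        out[tied[order]] = v + eps * np.arange(tied.size)
    return out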

• A drillhole dataset is required with a continuous grade variable. The rock type must also be present if you are filtering by rock type.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program does not loop over the chosen rock types; despiking is performed with one set of data. You may have to run this program many times. Of course, despiking does not really change the data values, so it is often a good idea to despike with all of the data, not filtering by rock type.


• Ties (spikes) are broken by considering the average within some isotropic radius. You specify that radius. A distance 3-4 times the data spacing seems to work well.

You will likely have to run despiking for each variable in a multivariate setting; the program does not automatically consider multiple variables.

The despiked grades are added to the drillhole dataset (the variable prefix can be specified by the user). The values are not fundamentally different from the original grades: there are just some minor random deviations added to break the ties.

Normal Scores Transformation

It is often necessary to transform data from the original histogram to the congenial normal/Gaussian distribution for subsequent geostatistical analysis. In data transformation, the ordering of the data is preserved, that is, high values before transformation remain high after transformation. The rank-preserving quantile transform or graphical transformation is implemented in GSLIB and many other software packages.

The program in Pangeos is an adaptation of nscore from GSLIB. The specified grade is transformed to a standard normal distribution.
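A minimal sketch of the quantile (graphical) transform follows; the plotting-position convention and the handling of weights are simplified relative to nscore, and the function name is hypothetical.

import numpy as np
from scipy.stats import norm

def normal_scores(values, weights=None):
    # Map each value's (weighted) cumulative probability, taken at the class
    # midpoint, to the corresponding standard normal quantile.
    n = len(values)
    w = np.ones(n) if weights is None else np.asarray(weights, float)
    order = np.argsort(values)
    cum = np.cumsum(w[order]) - 0.5 * w[order]
    y = np.empty(n)
    y[order] = norm.ppf(cum / w.sum())
    return y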

• A drillhole dataset is required with a continuous grade variable. The rock type must also be present if you are filtering by rock type.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program does not loop over the chosen rock types; normal scores transformation is performed with one set of data. You may have to run this program many times. A missing value is assigned to the samples of the other rock types; this is useful, as in the transformation for the next rock type you can save the transformed values in the same variable by ticking the option Update only non-missing values.

• You can transform the grade using a declustering weight in the drillhole dataset or according to a reference distribution that has been specified as a Probability Distribution data object.

• In general you should not be transforming the data with either declustering weights or a reference distribution. This is because these results are commonly used for variogram calculation; using weights for transformation makes it more difficult to interpret the calculated variogram points, that is, the sill will not be close to one.


The normal score transform is added as a variable (the prefix can be specified by the user) to the drillhole data object.

Stepwise Transformation

The cosimulation of multiple grades would normally require a relatively complex linear model of coregionalization fitting all direct and cross variograms between the multiple variables. Alternatives such as collocated cokriging are possible, but there remain strong assumptions regarding multiGaussianity and the ability of the correlation coefficients to capture all multivariate relationships.

The Stepwise Conditional Transformation procedure transforms multiple variables to be multivariate Gaussian and independent. This greatly simplifies cosimulation; the variables are transformed, independent Gaussian simulation is performed, and back-transformation imparts the correct multivariate correlation to the simulated values. The stepwise-conditional transformation procedure was introduced by Rosenblatt in 1952. The method was not quickly adopted because of data and computer requirements. Modern geostatistics removes these limitations and the stepwise procedure has become widely adopted in practice.
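A minimal two-variable sketch is given below; despiking is assumed to have been done already and the dynamic class expansion to a minimum number of data is omitted, and all names are hypothetical.

import numpy as np
from scipy.stats import norm

def stepwise_two(v1, v2, n_classes=10):
    # First variable: global normal scores. Second variable: normal scores
    # computed separately within classes of the first transformed variable.
    def nscore(v):
        r = np.argsort(np.argsort(v))
        return norm.ppf((r + 0.5) / len(v))
    y1 = nscore(v1)
    edges = np.quantile(y1, np.linspace(0.0, 1.0, n_classes + 1))
    cls = np.clip(np.digitize(y1, edges[1:-1]), 0, n_classes - 1)
    y2 = np.empty_like(y1)
    for k in range(n_classes):
        idx = np.where(cls == k)[0]
        if idx.size:
            y2[idx] = nscore(v2[idx])
    return y1, y2   # approximately independent standard normal variables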

• A drillhole dataset is required with multiple grade variables. The rock type must also be present if you are filtering by rock type.

• If there is no reference multivariate distribution, declustering weights from the data should be used if available.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program loops over the chosen rock types.

• The program is set up to transform up to four variables. The order matters: the first variable in the stepwise order must be on top, then the second, and so on. We do not recommend transforming more than three variables at a time. The transformation of more variables can be performed in groups or sets of two or three variables. The first variable can be left untransformed.

• The stepwise transformation proceeds from conditional distributions. You specify the number of classes for the stepwise transformation. Too few classes could cause features of the multivariate distribution not to be reproduced. Too many classes could lead to too few data in each class. There really ought to be 10-20 data in the classes with the fewest data.

• The minimum number of data in a class helps if you choose too many classes. The class is dynamically expanded until the minimum number of data is found.

• A different multivariate data distribution can be used as a reference, for example, if you have smoothed the multivariate distribution or if you have a more complete reference distribution from another source. If a reference distribution is used, declustering weights for this distribution should be considered instead of declustering weights from the data. The rock type should also be present if you are filtering by rock type. The rock types to consider are the same as those selected for the input dataset.

Transformed variables are added to the drillhole dataset. The transformation table is stored internally as a Transformation Table and made available to you when you reverse the stepwise transformation.

3.3 Trend Modeling Natural phenomena exhibit trends. There are many geological reasons for such large-scale trends. Mineral grades exhibit partially structured and partially random variations. A limitation of geostatistics is that the variability is considered stationary. This assumption entails that spatial statistics do not depend on location; for example, the mean and variogram are constant and relevant to the entire study area. Special efforts must be made to account for trends or so-called non-stationarities. Constraints have been added to kriging in an attempt to capture trends. Kriging with a trend model permits a trend of polynomial shape to be fitted within each local kriging neighborhood. Kriging with an external drift permits an independent variable to define the shape of the local trend surface. Experience has shown that these forms of kriging work when there are adequate data within each local neighborhood to define the trend. Ordinary kriging is robust with respect to trends, but simulation is very sensitive. Constructing a trend model may be important.

Any discussion on trends is subjective. All geostatisticians struggle with the separation of deterministic features, like trends, and stochastic correlated features, which are left to simulation. This separation is truly a decision of the modeler since deposits are deterministic; a “trend” and “residual” are meaningless if the deposit is known exactly. Trends are necessarily dependent on the available data. Most of the variation appears stochastic in the presence of sparse data; as more data become available, the trend model becomes more refined and less of the intrinsic variation is left to geostatistical modeling.

Certain trends may be inferred from general geological knowledge. In some situations the presence of a trend is observed through the data. There is an apparent contradiction when many drillholes are available: abundant data permit the trend to be identified, but explicitly modeling it becomes less important since the data impose their trend features on the resulting model as local conditioning data. Note, however, that in that case trends are only reproduced in the immediate vicinity of the drillholes.

Automatic fitting of a trend within the kriging formalism only works in presence of many data. A better approach is to define a deterministic trend model by hand contouring or guided machine contouring. It is common to model the vertical trend first and then map areal trends. There is no unique way to merge a trend map with a trend curve. A simple and practical approach is to scale the vertical trend curve to the correct areal average. At each location the vertical trend is multiplied by the corresponding areal trend at that location divided by the global average of the variable being considered.
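Written out, the merging rule at a location (x, y, z) is trend(x, y, z) = vertical(z) * areal(x, y) / global mean. A minimal sketch, assuming the vertical trend is a 1-D array over Z slices and the areal trend is a 2-D array over the areal locations; names are hypothetical.

import numpy as np

def merge_trends(vertical, areal, global_mean):
    # vertical: 1-D trend curve, one value per Z slice
    # areal:    2-D trend map, one value per areal (Y, X) location
    # Each vertical value is rescaled by the areal trend relative to the mean.
    nz = len(vertical)
    trend = np.empty((nz,) + areal.shape)
    for iz in range(nz):
        trend[iz] = vertical[iz] * areal / global_mean
    return trend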

The previous approach is used in Pangeos to model 3-D trends and calculate residuals. The trend modeling tool in Pangeos is undergoing a major rewrite that will permit much more flexibility. This documentation will be superseded.

• A drillhole dataset is required with a grade or categorical variable. The rock type must also be present if you are filtering by rock type.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program does not loop over the chosen rock types; trend modeling is performed with one set of data. You may have to construct a number of trend models. In the SGS program, a trend model for each rock type can be input.

• You must specify a 3-D grid for the trend model. This should be the grid size you are planning on using for simulation.

• You can choose to model a vertical trend. If you do, you will need a maximum vertical distance to establish the trend at a particular Z slice. You must also specify the minimum and maximum number of data. No trend will be modeled if there are fewer than the minimum number of data. The closest data (to the central Z slice) are used up to the maximum.

• You can choose to model an areal trend. If you do, you will need a maximum isotropic areal distance to establish the trend at a particular X/Y location. You must also specify the minimum and maximum number of data. No trend will be modeled if there are fewer than the minimum number of data; the missing value is assigned to the grid cell. The closest data (to each X/Y location) are used up to the maximum. The data are vertically averaged before being used to construct the areal trend; this is done in the program.

• The merging of the 1-D and 2-D trend into a 3-D trend requires an overall average. This average can be taken from the data or you can specify the value you want.

An inverse distance algorithm is used here to establish the averages to construct the trend. An alternative is to use block kriging with the kriging program to build the trend model.


A gridded trend model is created. For continuous variables this is a local mean; for categorical variables it is the expected probability of each rock type. The trend and/or the residual values can also be added as variables to the drillhole dataset.

The residuals at the data locations may be estimated or simulated at all locations and added back to the local mean (the trend), or the trend values could be treated as a secondary variable. The trend model is input in Kriging by selecting the Type: Local varying means and then selecting Secondary Variables under the Setting section. In SGS, select the option Consider Locally Varying Means and then, under Setting, select Locally Varying Means; the input for the model and variable will appear to the right.

3.4 Category Model From Polygons Categorical rock type models are often input to Pangeos from other software where wireframe modeling or some other technique is used to distinguish geologic rock types. Within Pangeos, a 3-D categorical model can be created from polygons and/or surfaces. The categories could represent geological rock types or other variables such as stopes or mineralogical boundaries. 2-D and 1-D grids can also be tagged by polygons.

• A 3-D category model is created - you must specify the grid resolution for the model. You can always rescale this model; however, it is a good idea to choose the smallest simulation grid as the starting grid.

• The set of categories must also be specified; you must define the set of categories before you can construct a 3-D model of them. This helps with the menu choices.

• The 3-D category model is initialized at a default category; you must specify which one. The category model is then built in a number of steps - you define each step.

• The steps operate sequentially - the order matters. You can recode cells that have been coded in preceding steps.

• Each step operates with a polygon or surface. The parameters for each step:

o One code will be assigned in each step - you must choose the code for the step. You can, of course, use the same code over and over again for different polygons or surfaces.

o You choose a polygon or surface to consider for coding. The polygon will be closed if it is not. The surface will only apply at locations where it is not missing. The polygons and/or surfaces have a specific orientation that is saved with them when they are loaded.

o The cells inside/below are coded with the selected value or the cells outside/above are coded. You must choose.

o You must also choose the (+/-) tolerance for polygons.

Note that you add polygons by clicking on the plus sign above the specifications. You can delete any one of them by clicking on the minus sign.

• The simplest way to set up a complex rock type model is with some other GMP software using wireframes, then import that model into Pangeos.


The result of the program is a 3-D gridded categorical variable that can be used for simulation, kriging, and reporting. You give the name of the model and the name of the categorical variable you are creating.

3.5 Local Probability Model From a Categorical Model The local probability model is constructed by calculating the proportion of each rock type within a specified window size. Those proportions are associated to probabilities and can be used in sequential indicator simulation.

To improve SIS using simple IK, a local probability distribution function (pdf) can be used at each simulated point rather than the global pdf. This is advantageous where sampling data are sparse. Using a local pdf with simple IK gives the same primary advantage as ordinary IK: the flexibility and robustness to account for a locally varying pdf; that is, it accounts for the difference in proportions of the indicators at each point. A global pdf may lead to situations where a rock type appears in an area where it is unreasonable; for example, a rock type with no data to suggest it occurs in a given area may still be drawn during simulation because of its presence in the global pdf.

To determine the probabilities at each location a previously generated deterministic model may be used. The deterministic model is not perfect, but the proportions of different indicators are approximately correct in each neighbourhood; at the very least the proportions for each region will be better than the overall global proportions.

Extracting the local probabilities for each point is quite simple. A spherical search window is used to read all nearby points in the deterministic model; these points give the local proportions. Using this method a local pdf can be easily determined for every point. Any shape of search window could be used, to account for anisotropy in the geologic bodies.
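A minimal sketch of the counting step is shown below, using a cubic window of half-width window cells rather than the anisotropic ellipsoidal search that the program supports; names are hypothetical.

import numpy as np

def local_proportions(model, window):
    # model: 3-D array of category codes; returns one proportion grid per code.
    nz, ny, nx = model.shape
    cats = np.unique(model)
    props = {c: np.zeros(model.shape) for c in cats}
    for iz in range(nz):
        for iy in range(ny):
            for ix in range(nx):
                sub = model[max(0, iz - window):iz + window + 1,
                            max(0, iy - window):iy + window + 1,
                            max(0, ix - window):ix + window + 1]
                for c in cats:
                    props[c][iz, iy, ix] = np.mean(sub == c)
    return props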

• A gridded rock type model is required with a categorical variable. The categories have to be defined as a data object from which you must choose the rock types to consider.

• The search window is specified through ranges and rotation angles for the three axes; this allows for anisotropic searches.

• A new variable with the local proportions obtained is added to the categorical model.


3.6 Tutorial Three: Preprocessing The purpose of this exercise is to lead users through some basic preprocessing steps with Pangeos. The software must be installed on your computer with a valid license.

1. Using the drillhole dataset Bill, create a cross plot of Au and Cu on logarithmic scale.

The stripes on the cross plot illustrate the need for despiking. You will need to run despiking twice: once for gold and once for copper. There is no need to perform this by rock type.

The cross plot to the right shows the despiked copper grade versus the original copper grade.


2. Run cell declustering for gold using all data, for 50 cell sizes from a minimum of 10 to a maximum of 200 meters. The result is quite typical and is due to a large-scale trend: the lower grades on the margin of the deposit cause the declustered mean to continue decreasing. The drillhole spacing is on the order of 50 to 75 meters; the cell size should be around that size.

Declustering and despiking can be done in any order.

3. The nearest neighbor declustering (or declustering by kriging) is more robust in a case like this. Run nearest neighbor declustering with the parameters shown below. Note that the data are kept within the correct rock types. The overall average drops from 0.728 to 0.709 g/t Au.

Declustering by kriging can also be tried.

As a side note, this is a difficult declustering problem. The low grades on the margin areally tend to bring the global mean down. The high grades at the bottom of the data set (yes, the data was cut from a more complete dataset) tend to increase the declustered mean.

4. Run normal scores transformation for gold and copper in each rock type. Normal score transformation outside SGS is useful for variogram calculation; in this sense, the transformation should not be done with declustering weights. The SGS program can transform and back transform the data within its execution. The following shows the normal scores parameters for gold in rock type one; the same parameters are used for the rock type two transformation, changing the rock type category and selecting the option Update only non-missing values. Remember to use the despiked grade for the transformation.

5. Run the stepwise transformation program for two variables. In this case we choose the declustering weights from the nearest neighbor declustering.

Create cross plots of the transformed values and compare them with the cross plots of the normal scores transformed values or the original values. The last two are correlated, but the stepwise transforms have no correlation. The stepwise transformation is useful to remove correlation between the variables for independent simulation.


6. Perform a full 3-D trend modeling for gold and copper for each rock type independently, if the rock types are going to be simulated separately. An example of the parameters for gold in rock type two is shown below. Note the distances and number of data. The number of data in the areal trend case is the number of areal locations with drillholes (including all elevations); it is not the number of raw drillhole data, so the minimum and maximum numbers may need to be reduced.

The trend modeling should be done with the normal score transformed variable or the stepwise transformed variable, depending on whether cosimulation or independent simulation will be performed later.


There is a significant areal and vertical trend for gold in RT2, as shown in slice XZ - 65. As mentioned briefly above, this data was cut from a larger dataset so the bottom of the data ends in ore, which would be a concern in a real study. Our goal here is to understand the trend and use it in subsequent geostatistical simulation.

For gold in RT1 there is also an areal trend as shown in Slice XY- 42.


4 Variogram Analysis Tools An essential aspect of geostatistical modeling is to establish quantitative measures of spatial correlation to be used for subsequent estimation and simulation. The variogram is the most common measure of spatial correlation; most geostatistical estimation and simulation algorithms require a variogram model.

If data transformation is required in the following estimation or simulation, this should be done prior to any calculation. The use of Gaussian techniques requires a prior normal score transformation of the data and the variogram of those transformed data. Indicator techniques require the appropriate coding of data.

4.1 Variogram Volume Calculation Directions of continuity are most often known from geological interpretation or preliminary contouring of data. When they are not clear, a variogram map or volume is useful for establishing directions of anisotropy. Dense drillhole data permit calculation of variogram volumes from experimental data. The variogram map takes the idea of calculating the variogram in a number of directions to its logical extreme. The variogram is calculated for a large number of directions and distances; then the variogram values are posted on a map where the center of the map is the lag distance of zero.

This is an adaptation of varmap from GSLIB. The main use of the variogram map is to detect the major and minor directions of continuity. Variogram maps can be calculated with either gridded or scattered data.

• A drillhole dataset or a gridded model is required with a grade or categorical variable. The rock type must also be present if you are filtering by rock type.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program does not loop over the chosen rock types; variogram calculation is performed with one set of data.

• You can compute any number of variograms. Each variogram requires the following parameters:

o The first variable for the variogram calculation. The paired variable need not be the same as the first variable if you are calculating a cross variogram or cross covariance.

o The type of variogram must also be chosen. The types are the standard ones given in GSLIB (see next section for details). You will be asked for a threshold/category if you ask for an indicator variogram.

o You must name the variogram. Each variogram map/volume will be saved in the same gridded data object with a different name.


Note that you add variograms by clicking on the plus sign above the variogram specifications. You can delete any one of them by clicking on the minus sign at the far right.

• You must specify the size of the variogram map/volume by giving the number of lags and the lag size. Setting the number of lags to one causes that coordinate to be ignored in the calculations. Note that the lags do not overlap, which can lead to noisy variogram maps if you choose the lag size too small or to overly smoothed variogram maps if you choose it too big (see the next section for considerations on how to pick these parameters).

• Each point on the variogram volume must have the minimum number of pairs; otherwise, the value is set to a missing value code.

• The option to standardize the sill is self-explanatory. The variogram values are divided through by the variance. This is not applied for all variogram types.

• The name at the bottom of the parameter set is the name of the 3-D model. The names mentioned above (at the right of each variogram) are the variables within the 3-D model. A 3D grid specification is also created.

A gridded model is created with the experimental variogram volume. Only slices through the origin (middle slice = total number of slices divided by two, plus one) are easily interpretable - you should probably not browse through all of the slices seeking variogram enlightenment. The variogram map in the XY orientation is displayed under the Graphic 1 tab.

4.2 Directional Variogram Calculation Once the directions for variogram calculation have been chosen according to geological knowledge or inferred from variogram maps, directional variograms can be calculated.


This option is an adaptation of gamv and gam from GSLIB. Any number of direct and cross variograms in any number of directions are calculated. These experimental variograms are chosen by name in the variogram-fitting tool, which also creates plots.
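To make the pair-selection logic concrete, here is a minimal sketch of a traditional semi-variogram for scattered data with only an azimuth tolerance and a lag tolerance; dip tolerance and bandwidths are omitted, and the function name and arguments are hypothetical.

import numpy as np

def directional_semivariogram(xyz, v, azimuth, atol, lag, ltol, nlags):
    # gamma(h) = average of 0.5 * (v_i - v_j)^2 over pairs whose separation
    # azimuth (clockwise from north) is within atol of the requested azimuth
    # and whose distance is within ltol of a lag center.
    gam = np.zeros(nlags)
    npairs = np.zeros(nlags)
    for i in range(len(v)):
        for j in range(i + 1, len(v)):
            dx, dy, dz = xyz[j] - xyz[i]
            h = np.sqrt(dx * dx + dy * dy + dz * dz)
            az = np.degrees(np.arctan2(dx, dy)) % 180.0
            diff = abs(az - azimuth % 180.0)
            if min(diff, 180.0 - diff) > atol:
                continue
            for k in range(nlags):
                if abs(h - (k + 1) * lag) <= ltol:
                    gam[k] += 0.5 * (v[i] - v[j]) ** 2
                    npairs[k] += 1
                    break
    return np.where(npairs > 0, gam / np.maximum(npairs, 1), np.nan), npairs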

• A drillhole dataset is required input. The rock type must also be present if you are filtering by rock type.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program does not consider multiple rock types; variogram calculation is performed with one set of data.

• You can compute any number of variograms. Each variogram requires the following parameters:

o The first variable for the variogram calculation. The paired variable need not be the same as the first variable if you are calculating a cross variogram or cross covariance.

o The type of variogram must also be chosen. The types are the standard ones given in GSLIB, that is, traditional semi-variogram, traditional cross semi-variogram, covariance, correlogram, general relative semi-variogram, pairwise relative semi-variogram, semi-variogram logarithms, semi-madogram, continuous indicator semi-variogram and categorical indicator semi-variogram. You will be asked for a threshold/category if you ask for an indicator variogram.

o You must name the variogram. By clicking the button to the right of the input for the name, a name is proposed based on the variables and the variogram type. Each variogram will be saved as a different experimental variogram object. Note that each variogram name will ultimately be the merger of the variogram name prefix at the bottom of the window, the variogram name, and the direction.

Note that you add variograms by clicking on the plus sign above the variogram specifications. You can delete any one of them by clicking on the minus sign at the far right.

• The option to standardize the sill is self-explanatory. The variogram values are divided through by the variance. This is not applied for all variogram types.

• You can compute variograms in any number of directions. Each direction requires the following parameters (the conventions are the same as GSLIB):

• The azimuth direction (measured clockwise from north-south), the azimuth tolerance, and the horizontal bandwidth or maximum deviation from the azimuth direction. The tolerance is a half-angle tolerance in degrees; a value of 90° implies an omnidirectional horizontal variogram.

• The dip direction (measured in negative degrees down from horizontal), the dip tolerance, and the vertical bandwidth or maximum deviation from the dip direction. The tolerance is a half-angle tolerance in degrees.

• Three parameters define the lag spacing for each direction: the lag distance (or length), the lag tolerance, and the number of lags. Note that this is different from GSLIB where you have one lag definition for all directions. So you can calculate the vertical and horizontal directions in one pass.

• Variograms are calculated in three orthogonal directions. Once the principal direction has been chosen, the two orthogonal directions can be computed by clicking the Auto-fill main directions button and then Ok in the Automatic angles window. Alternatively, you can choose the orthogonal directions interactively in the directions editor window.

• Or you can click the Auto-fill range of directions option and create a set of directions with the same parameters for a number of azimuth or dip angles. The 360 degrees are divided by the number of angles to be considered; the angle increment can be set automatically by clicking the little arrow to the right of this parameter or set by the user to any value.


• In a regular grid, the directions are specified by giving the number of nodes that must be shifted to move from a node on the grid to the nearest node that lies along the directional vector. No lag tolerances are allowed. Some examples for a cubic grid:

• You must name each direction. By clicking the button a name is proposed based on the chosen direction. As mentioned above, the direction name constitutes part of the final variogram name.

• The name at the bottom of the parameter set is the name of the 3-D model. The names mentioned above (at the right of each variogram) are the variables within the 3-D model.

A number of experimental variograms are added as objects. They can be plotted and used in the variogram fitting tool.


Of the different variogram types that can be calculated, the theory behind kriging and simulation requires the use of either the covariance or the semi-variogram. Some alternative variogram types, more robust in the presence of outlier values, clustering, sampling error, or sparse data, are only acceptable to help identify the range of correlation and anisotropy.

Careful selection of variogram calculation parameters is critical for obtaining a clean, interpretable sample variogram; here are some recommendations:

Plot data locations and contour data values prior to choosing any parameters; this allows prior detection of data spacing and clusters, trends, discontinuities, and other features.

Start with an omnidirectional variogram, which often yields the best-behaved variogram, and then calculate the three principal orthogonal directions. To calculate an omnidirectional variogram, the azimuth and dip angle tolerances should be set to 90°.

The lag separation distance is usually chosen to coincide with the data spacing. The distance tolerance is often chosen as one half of the unit lag distance. When there are many data the tolerance could be reduced, for example to one quarter of the unit lag distance, or it can be increased to three quarters in the presence of few data. The number of lags is chosen so that the total distance is about one half of the deposit size in the direction being considered.

The dip angle tolerance is needed when drillholes are not truly vertical; in this case a tolerance of 10 to 20 degrees is often used. The dip angle tolerance should be smaller than the horizontal angle tolerance, say 5° compared to 22.5°, due to the much larger variability in the vertical direction.

The bandwidth parameter is sometimes used to limit the maximum deviation from the vertical and/or horizontal directions. It should be considered with deviated drillholes. It is set to a large value for omnidirectional variogram calculation or when there are few data. For directional variograms in presence of sufficient data, it can be set small, say, to one to three times the unit lag distance.

4.3 Variogram Fitting The calculated variogram points cannot be used directly; a licit variogram measure is needed for all distances and directions, noisy results should be discounted, and geological interpretation should be used in the construction of the final variogram model. Variogram interpretation and modeling is important.

Experimental variograms are simultaneously fit and plotted in 1, 2, or 3 dimensions. Cyclicity and zonal anisotropy can be selected for particular directions. The program will fit experimental variogram data objects calculated with the Directional Variogram Calculation tool.

• The three angles specify the principal direction of continuity: azimuth, dip and plunge. These angles are not used in any calculations, but are simply passed into the output models so that the anisotropy is correct. The azimuth angle rotates the original Y-axis (principal direction) in the horizontal plane in degrees clockwise. The dip angle rotates the principal direction in negative degrees down from horizontal. The plunge angle rotates the minor horizontal direction counterclockwise relative to the principal direction. The minor and vertical directions are orthogonal to the direction specified by the three angles.

• You can turn on/off any of the three directions. When a direction is turned on, you will need to specify the experimental variogram and a name for that variogram. The name will appear on the plot. You can load the files with the Principal, Perpendicular and Vertical directions, their names, and rotation angles by clicking the Get direction set button.

• There are flags for zonal anisotropy and cyclicity. Checking one of these will add a variogram structure. You cannot check all of them. The program will allow zonal anisotropy in up to two directions and cyclicity in one direction.

• You can fix the total sill and/or the nugget effect if you wish. These are the most common places where you intervene in the fitting process.

• The weighting by distance (inverse distance) and number of pairs can be used to improve the fit at close distances and to improve the fit to experimental variogram points that have more data. You will have to experiment to see how they affect the fit.

• You can omit experimental variogram points when there are fewer than a minimum number of pairs. The data for the experimental variogram can be viewed under Data/Experimental Variograms.

• The Variogram Structures box at the bottom of the input panel permits you to take more control of the variogram fitting.


o The plus/minus sign changes the number of structures.

o You can choose to fix the structure types, the sills, and the ranges in the three principal directions. You must set the value for all nested structures when you choose to freeze one.

• The output name at the bottom refers to the resulting variogram model. You will choose this variogram by name in subsequent programs.

The resulting 3-D variogram model is added as a Variogram Model data object. A graphic with the experimental and fitted model is also created.

The sill is the most important variogram parameter that should be fixed. When trends, zonal anisotropies, and/or cyclicity are present, automatically determining the sill does not work very well and is inefficient. However, in most cases the user will know what the sill is supposed to be before any attempts are made to model the experimental variograms. For all simulation algorithms the sill needs to be fixed to the theoretical variance. This is not a requirement in kriging, and an apparent sill greater or lower than the expected sill may be considered an indication of trends and/or zonal anisotropies.

Since there is a drastic difference in the shape between certain structure types the program may get “stuck” with one particular type and not be able to change to a better structure type. Choosing the type of structure is a simple task for the user and in some cases will give better results.

In all real depositional cases the range of correlation depends on direction; it is common that the vertical range of correlation is less than the horizontal range due to the larger lateral distance of deposition. Zonal anisotropy is a limiting case where the range of correlation in one direction exceeds the field size, which leads to a directional variogram that appears not to reach the sill or variance.


Geological phenomena often occur repetitively over geologic time, leading to repetitive or cyclic variations in the facies and petrophysical properties. This imparts a cyclic behavior to the variogram; that is, the variogram will show positive correlation, then negative correlation at the length scale of the geologic cycles, then positive correlation again, and so on.

Since the resulting 3-D variogram model is a Variogram Model data object, it can be edited by modifying any of the parameters under Data/Objects or in the Output tab:

The Edit button opens a new window where variogram models can be modified interactively by clicking in the graph and sliding the fitted model in the three principal directions. If you click and slide within the graph, you will edit the contribution of the current structure. If you click within the graph and slide holding the Ctrl key down, you will edit the contribution and the range. If you click and slide below the distance axis, you will edit the range only. If you click to the left of the variogram axis, you will edit the nugget effect. You can also add or eliminate structures by clicking on the plus sign or minus sign, respectively. The model is modified in the three principal directions simultaneously; to accept the changes you must click the accept button. In this example the three directions are equal because it is an omnidirectional variogram model.


4.4 Average Variogram Calculation Average variogram or gamma bar values are required for volume support corrections including histogram and variogram scaling.

The gamma bar value represents the mean variogram value when one extremity of the vector h describes the domain v and the other extremity independently describes a larger domain V. Typically v corresponds to the composite scale and V to some selective mining unit (SMU). The variance of the composites within a block is given by the gamma bar value.

Although there exist certain analytical solutions, we systematically estimate the average variogram by discretizing the block volume into a number of points and simply averaging the variogram values. This is the method used in Pangeos.
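A minimal sketch of the numerical gamma-bar calculation is shown below, assuming an isotropic variogram function of distance only; the real calculation uses the full anisotropic 3-D variogram model, and the names are hypothetical.

import numpy as np

def gammabar(gamma, size, n=(5, 5, 5)):
    # Discretize the block into n[0] x n[1] x n[2] points and average the
    # variogram over all pairs of discretization points.
    axes = [(np.arange(k) + 0.5) / k * s for k, s in zip(n, size)]
    pts = np.array(np.meshgrid(*axes, indexing='ij')).reshape(3, -1).T
    total = 0.0
    for p in pts:
        total += gamma(np.linalg.norm(pts - p, axis=1)).sum()
    return total / len(pts) ** 2

# Example: spherical variogram with unit sill and range a
sph = lambda h, a=100.0: np.where(h < a, 1.5 * h / a - 0.5 * (h / a) ** 3, 1.0)
print(gammabar(sph, size=(10.0, 10.0, 5.0)))   # average variogram within a 10 x 10 x 5 block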

• The volume must be a rectangular parallelepiped oriented with the X/Y/Z grid system. You specify the size of the volume in these three directions. You could set a size to zero for a 2-D or 1-D domain.

• A variogram model must be selected from the output models of variogram fitting or by entering the parameters in the Objects/Variogram models window.

The result will be saved as a general data type and it will also be displayed in a dialogue box.


4.5 Histogram Scaling The change of volume support from a quasi-point composite scale to a larger block scale changes the shape of the distribution.

The variance decreases as the volume increases due to the “averaging out” of high and low values. This tool corrects the histogram for a different selective mining unit (SMU) volume support. You can choose the affine, indirect lognormal, or discrete Gaussian change of shape model.

• An input drillhole dataset can be selected for histogram scaling. The rock type must also be present if you are filtering by rock type.

• The Weight can be a declustering weight, specific gravity, composite length, or thickness for 2-D data. These weights do not need to add up to one; they will be restandardized.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program does not loop over the chosen rock types; the histogram is scaled for all of the data in all of the rock types you chose.

• The average variogram values at the starting and target scale specify the change in variance for the program. The average variogram at the starting scale will be zero if you associate the data scale to a point scale and the data variogram to the point variogram. The target scale average variogram value is calculated from the Average Variogram Value program with your target block size and chosen variogram model.

• You must choose one of the three classic change of shape models. It is common to choose affine for a vein-type mineralisation and the discrete Gaussian model for a disseminated mineralisation. The indirect lognormal correction is a compromise.

This program generates a new variable for the drillhole dataset, which is the variance corrected variable. A new distribution is also added.
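As an illustration, a minimal sketch of the affine correction is given below. The variance reduction factor f is derived from the average variogram values described above (with sill C, f = (C - gammabar_block) / (C - gammabar_data)); the names are hypothetical and a declustered mean would normally be used in place of the simple mean.

import numpy as np

def affine_correction(z, sill, gbar_data, gbar_block, mean=None):
    # Shrink the values toward the mean by sqrt(f); the histogram shape is
    # preserved, only the spread (variance) is reduced.
    m = z.mean() if mean is None else mean
    f = (sill - gbar_block) / (sill - gbar_data)
    return m + np.sqrt(f) * (z - m)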

4.6 Variogram Scaling Variograms are often calculated and fit at the point composite data scale. This program calculates the variogram model representative of a larger block scale. The theoretical variogram model at a larger scale is useful to compare to another data source. For example: blast holes, panel grades, etc.

• An input variogram model at some arbitrary scale must be selected.

• The input data scale and the target scales must be selected. The input scale is often 0,0,0 if it is being associated to a point scale. The target scale is self-explanatory.

The scaled variogram model is added as a Variogram Model data object.


4.7 Variogram Model Computation This is an adaptation of vmodel from GSLIB. The program calculates the variogram values of a variogram model in a given direction for a specified number of lags and lag distance.

• A variogram model is required.

• The number of lags, lag distance, azimuth and dip must be selected for the direction of interest.

The result is saved as an experimental variogram data type.

4.8 Variogram Plotting This is an adaptation of vargplt from GSLIB. Different experimental variograms and models can be plotted together, for example to analyze continuity in different directions.

• Experimental variograms and models can be added to a plot by clicking on the plus sign. The type of line and color can be selected.

• The variogram and distance limits can be obtained from the data minimum and maximum, or specified by the user. The sill can be displayed at a specific value.

4.9 Tutorial Four: Variogram Calculation and Modeling The purpose of this exercise is to lead users through anisotropy detection, directional variogram calculation and variogram modeling with Pangeos. The software must be installed on your computer with a valid license.

1. Using the drillhole dataset Bill, run variogram maps to help detect anisotropy directions. For this example we used the despiked normal scores gold variable in rock type two.

The following setup can be used:


Only the XY slice is displayed in the Graphic 1 tab. You can use the 2-D Slice tools from Data Analysis to visualize the central slice in the other directions.

Choose to Display Gridded Data, select the name of the calculated variogram volume, and select the slice orientation.

The central slice corresponds to the number of lags plus one. You need to reset the minimum and maximum of data to display; in this case we chose 0 and 1.5 respectively.

The XY anisotropy is quite clear. The YZ anisotropy looks different at close and far distances. The anisotropy on XZ is not strong. But since there is an anisotropy in the azimuth direction, the directions in these planes are not the true angles. To look at the correct slice orientation, the data should be rotated by the azimuth angle (this option is still under development). In this case the data were rotated 25°; the variogram maps in the YZ and XZ orientations are shown below.


This anisotropy is for the gold in rock type two. You should also calculate variogram maps for rock type one as well as for copper.

2. Calculate directional variograms. From the variogram maps above, we chose the principal direction for gold in rock type two as an azimuth of 25° and dip 65°; the horizontal perpendicular direction has an azimuth of 115° and dip 0°. The vertical has an azimuth of 25° and dip -25°. Copper in rock type two shows the same anisotropy. Gold and copper in rock type one have a principal direction of anisotropy of 45° azimuth and dip 65°. Use the following setup in case you need help with the rest of the parameters.

3. Fit the three directional variograms calculated using the Fitting Variogram tool. Try fixing different parameters until you are satisfied with the resulting model, or interactively modify the model in the Variogram Editor window by clicking the Edit button in the Output tab.


Take care in the variogram fitting parameters. The directions are not automatically passed to the fitting program; you have to enter the angles. Note the azimuth of 45 and dip 65 in the parameters above.

4. Calculate directional variograms and fit them for gold in rock type two and for copper in both rock types. If at this point you still want to practice some more with variograms, you can also calculate them for the stepwise transformed copper variable; there is no need to do it for gold if you use it as the first variable in the transformation, since it will be the same as the normal scores variable. You could also calculate the variograms for the residual variable created in the trend modeling.


5 Geostatistical Modeling Tools A ubiquitous problem in earth sciences is creating a map of a regionalized variable from limited sample data. Pangeos modeling tools include kriging, sequential indicator simulation, and sequential Gaussian simulation.

5.1 Kriging Kriging is the traditional mapping tool used for estimation in industry and is embedded in simulation methods.

Kriging is a procedure for constructing a minimum error variance linear estimate at a location where the true value is unknown. The kriging estimator is unbiased. Kriging is locally accurate but smooth, appropriate for visualizing trends but inappropriate for process evaluation where extreme values are important. However, it has the advantage of providing a quantification of how smooth the estimates are. Kriging accounts for (1) the proximity of the data to the location being estimated, (2) the clustering of the data, and (3) the structural continuity of the variable being considered.
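To make the mechanics concrete, here is a minimal sketch of simple kriging at a single location, assuming an isotropic covariance function of distance; search strategies, block discretization, and anisotropy are omitted, and the names are hypothetical.

import numpy as np

def simple_kriging(cov, data_xyz, data_val, mean, target):
    # Solve C w = c for the weights, then z* = m + w'(z - m).
    d = np.linalg.norm(data_xyz[:, None, :] - data_xyz[None, :, :], axis=2)
    C = cov(d)                                            # data-to-data covariances
    c = cov(np.linalg.norm(data_xyz - target, axis=1))    # data-to-target covariances
    w = np.linalg.solve(C, c)
    estimate = mean + w @ (data_val - mean)
    variance = cov(0.0) - w @ c                           # simple kriging variance
    return estimate, variance

# Example: spherical covariance with unit sill and range a
cov = lambda h, a=100.0: np.where(h < a, 1.0 - 1.5 * h / a + 0.5 * (h / a) ** 3, 0.0)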

There are several versions of kriging that make different assumptions regarding the mean. Simple kriging does not constrain the weights and works with the residual from the mean, which is assumed to be constant and known; it is the preferred approach when a geologically interpreted trend model is available. Ordinary kriging estimates a constant mean value from the data used in the kriging neighborhood. Ordinary kriging has seen wide application in map making because it is robust with respect to trends. There is a need, however, to have sufficient data to reliably estimate the mean at each location. Kriging with a trend model considers that the mean is unknown and that it follows a more complex trend of polynomial shape with unknown coefficients that are fitted from the data. Kriging with an external drift considers a non-parametric trend shape that comes from a secondary variable. Cokriging is kriging with different types of data.

There are two ways to handle an exhaustively sampled secondary variable: kriging with an external drift and cokriging. Kriging with an external drift only uses the secondary variable to inform the shape of the primary variable mean surface, whereas cokriging uses the secondary variable to its full potential. The kriging version in Pangeos is an adaptation of kt3d from GSLIB. The program will decluster, cross validate, jackknife, or estimate point/block values on a grid with inverse distance or a variety of kriging methods. The program will krige with hard/soft boundaries.

• There are two main options in the General box of parameters:

o Option specifies whether you want to estimate a grid, cross validate with the input drillhole data, jackknife with a new set of locations, or accumulate the declustering weights. The program will only do one at a time.

o Type specifies the estimation type. The same procedure will be applied in all rock types of the model. The techniques are inverse distance, simple kriging, ordinary kriging, locally varying means, external drift, and collocated cokriging. Kriging with a trend, also known as universal kriging, can be performed under ordinary kriging by specifying the drift parameters.


• An output model will be created. The rock type, estimate, and estimation variance will be saved for every block. Increasing the debug level will create more information on the estimates. The output will be written to the dll output tab when Pangeos is in debug mode.

• Kriging grid & input data parameters:

o The grid must be defined in the data objects before kriging. The categorical variable model (if used) and any secondary data model (if used) must be at this grid resolution. You may need to rescale those secondary models.

o The drillhole ID variable is required in the search (for the maximum per drillhole) and in cross validation for removing the entire drillhole when re-estimating a data location.

o The variable being kriged must be present in the input data.

o The secondary variable is required for the options locally varying means and external drift, where the secondary variable is needed at the data locations. For collocated cokriging it is optional. Often a gridded model of the secondary variable is also required.

o The category in the input data is required when estimating by rock type.

• Categories parameters: you have the option of setting up different parameters within each rock type. You must have a categorical model at the scale of the grid being estimated. You must also have defined the categories as a data object. Then, you choose the categories or rock types that you want to krige into.

The category logic matrix is used to specify hard and soft geological boundaries. Each row corresponds to the drillhole data in a specific category. Each column corresponds to a grid block in a specific category. If an i/j location is checked, then data of that row's category are used to estimate blocks of that column's category.

• The Drift parameters available in ordinary kriging are used for universal kriging (AKA kriging with a trend). The options are linear or quadratic in the X, Y and/or Z, or cross quadratic in the XY, XZ and/or ZY. In general, universal kriging is of little benefit in interpolation and dangerous in extrapolation.

• Search parameters:

o The location is not estimated when fewer than the minimum number of data are found. The closest maximum number of data is used subject to the other search constraints.

o The octant search is as in GSLIB - regular X/Y/Z octants are used and you must specify a maximum per octant. In general, you do not need the octant search if you set the maximum per drillhole.

o The maximum from other categories is used when soft boundaries are used between categories or rock types (see Categories parameters above). This limits the number of data from other categories and forces the search to pick up data from the same rock type, even if they are farther away.


o The search ranges and angles are as in GSLIB - angles operate as an azimuth, dip, and plunge and the search radii are in the rotated directions. Setting the search to the variogram range is standard practice, and is a good idea in simple kriging. The search can be set larger for ordinary kriging (to get a better implicit estimate of the mean) or smaller to rely on stationarity less. It can also be based on the data spacing: smaller search radius (less than the variogram range) can be used for closer grid spacing, and a larger search radius (than the variogram range) could be used for widely spaced data.

• The Settings parameters are where you enter the variograms for each rock type, the simple kriging means (if needed), the correlation coefficients by category for collocated cokriging (if needed), and the secondary variable gridded models (if needed). The panel to the right must be filled out when an option is chosen at the left.

The results will be added to the input drillhole data set for declustering, cross validation, or jackknifing. A 3-D grid with estimate and estimation variance may be created.

You can stop the process at any time by clicking the stop icon.

In general, kriging is used as a mapping algorithm to show global trends or to provide a local estimate to be further processed. In the first case, a large number of samples, and hence a large search radius, should be used to provide a smooth continuous surface. In the second case, one may have to limit the number of samples within the local search neighborhood.

When using an octant search, the maximum number of samples should be set larger than normal, since octant searches in the presence of regularly spaced data aligned with the block centers can cause artifacts.

In cross validation mode the conditioning data are deleted one at a time and re-estimated from the remaining neighboring data. If the drillhole ID variable (which should be specified when importing the drillhole data) is available, the entire drillhole containing the selected sample is removed in turn; otherwise the removal is done sample by sample. The latter can give an unrealistic sense of goodness in the model. The output of cross validation is a cross plot of the estimated values against the true (data) values under the Graphic 1 tab; the estimate, the kriging variance, and the error (estimate minus true) are also added to the drillhole dataset as variables. Cross validation is a simple way of checking the goodness of the modeling parameters and of catching critical flaws. The term jackknife applies to resampling without replacement, that is, when an alternative set of data values is re-estimated from another non-overlapping data set. A jackknife data set is required to use this option. The output is similar to that of cross validation.
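
The kind of summary that is useful when reviewing cross validation or jackknife output can be reproduced outside Pangeos; a minimal Python sketch (the array and function names are hypothetical, not part of Pangeos) is:

import numpy as np

def xval_summary(truth, est):
    """Basic cross-validation checks: the mean error should be near zero
    (unbiased), the mean squared error small, and the correlation between
    estimate and true value high."""
    err = est - truth
    return {"mean_error": float(err.mean()),
            "mse": float((err ** 2).mean()),
            "correlation": float(np.corrcoef(truth, est)[0, 1])}

# Example with made-up values:
truth = np.array([1.2, 0.8, 2.1, 1.5, 0.9])
est = np.array([1.1, 1.0, 1.8, 1.6, 1.0])
print(xval_summary(truth, est))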

5.2 Sequential Indicator Simulation

The key idea behind the indicator formalism is to code all of the data in a common format, that is, as probability values. The indicator approach can be used for continuous or categorical data variables. The Pangeos implementation of sequential indicator simulation considers categorical variables.

The aim of the indicator formalism for categorical variables is to directly estimate the distribution of uncertainty in the categorical facies variable. All hard data are coded as discrete zeros and ones. The indicator coding for a particular facies type k’ is the probability that facies k’ is present, which is the probability density function by definition. The probability of facies k’ at an unsampled location can be estimated by kriging. Indicator variograms must be calculated for each facies.

Sequential indicator simulation consists of visiting each grid node in a random order. At each grid node, find nearby data and previously simulated grid nodes, construct the conditional distribution by kriging (that is, calculate the probability of each facies being present at the current location), and draw a simulated facies from the set of probabilities. If necessary, order relation corrections are applied after kriging and before drawing from the conditional distribution; the estimated probabilities must be non-negative and sum to one.
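
As an illustration of the order relation correction and the draw step described above, the following minimal Python sketch (not Pangeos code; the function name and inputs are hypothetical) clips the kriged probabilities, renormalizes them, and samples a facies by inverting the cumulative distribution:

import numpy as np

def draw_facies(probs, rng):
    """Order-relation correction followed by a draw: clip negative kriged
    probabilities, renormalize so they sum to one, then sample a facies
    code by inverting the CDF."""
    p = np.clip(np.asarray(probs, dtype=float), 0.0, 1.0)  # non-negative
    p = p / p.sum()                                        # sum to one
    cdf = np.cumsum(p)
    return int(np.searchsorted(cdf, rng.random()))         # facies index 0..K-1

rng = np.random.default_rng(69069)
kriged = [0.55, -0.03, 0.31, 0.12]   # raw indicator-kriged probabilities for 4 facies
print(draw_facies(kriged, rng))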

This is an adaptation of sisim from GSLIB for categorical variables.

• There are two main options in the General box of parameters:

o You can specify the random seed number so that you get the same realizations each time you run the program. Alternatively, the seed number can be taken as some function of the time on your computer and, effectively, use a random seed.

o The number of realizations is self-explanatory.

• An output model will be created if it does not exist. If it exists, you can choose to replace all realizations or to append to existing realizations.

Replacing the realizations is a good idea when you are fine tuning your simulation parameters - you do not want to keep some initial realizations. You can append to existing realizations to get more realizations with the same parameters or additional realizations with slightly different parameters (perhaps you want realizations coming from different scenarios).

• Simulation grid & input data parameters:

o The grid must be defined in the data objects before simulation.

o You can generate an unconditional realization by unchecking the Conditional option.

o The input drillhole data should contain the variable you want to simulate.

o The variable to simulate is selected from the list of all categorical variables associated to the chosen drillhole dataset.

• Categories parameters: you have the option of setting up different parameters within each category. You must have defined the categories as a data object. The proportion of each category can be local or global. If the proportions are local, a gridded file with the probabilities of each facies must be specified, for example the local probability model constructed from a category model. Variogram models are needed for each category.

• Search parameters:

o The closest maximum number of data needs to be specified. The data are assigned to the grid nodes. You have no choice. You may need to simulate a larger grid to include the data and not relocate them from outside the grid. In fact, data outside of the grid will not be relocated onto the grid.

o The search ranges and angles are as in GSLIB - angles operate as azimuth, dip, and plunge and the search radii are in the rotated directions. Setting the search to the variogram range is standard practice. The search is common across all categories; the idea is to set a global search to the maximum search radius among the different rock types. The search/choosing of samples is rock type specific, as the measure of distance depends on the variogram of each category.

o The multiple grid search is a good idea to reproduce the long range variogram structure. There may be cases where it is not a good idea, but you should usually have this option turned on.

o The octant search is as in GSLIB - regular X/Y/Z octants are used and you must specify a maximum per octant. In general, you do not need the octant search if you set the maximum per drillhole.

The output is multiple gridded realizations of multiple categorical models.

You can stop the process at any time by clicking the stop icon.

The statistics (mean, variance, shape of the distribution, variogram, etc.) of each realization will fluctuate around the model statistics. This fluctuation will be greater if the domain being simulated has either a small number of nodes or is smaller than the range of correlation. Indicator simulations tend to show more fluctuation than Gaussian simulation. Fluctuations can be reduced by postprocessing the realizations with the Realization Histogram Correction option of the Postprocessing module, so that the histograms and variograms are honored more closely.

5.3 Sequential Gaussian Simulation

Sequential Gaussian simulation has become the most popular algorithm for continuous property modelling. The popularity of this algorithm is due to its relative simplicity and effectiveness at creating numerical models with correct spatial statistics.

Simulation honors local data, reproduces the histogram and variogram, and makes an assessment of global uncertainty possible. Kriged estimates are too smooth and inappropriate for most engineering applications; simulation corrects for this smoothness and ensures that the variogram/covariance is honored. A key idea of sequential simulation is to use previously simulated values as data so that the covariance between all of the simulated values is correct.
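
A minimal, unconditional 1-D sketch of the sequential idea in Python may help fix the mechanics: each node is visited along a random path, simple kriging from the previously simulated nodes gives the conditional mean and variance, and a value is drawn from that Gaussian distribution. This is only an illustration with an assumed exponential covariance and a zero global mean, not the Pangeos implementation:

import numpy as np

def sgs_1d(n_nodes, spacing, vrange, seed=69069):
    # Visit nodes in random order; simple-krige each node from the nodes
    # already simulated (exponential covariance, unit sill, mean 0), then
    # draw from N(mean, variance).
    rng = np.random.default_rng(seed)
    x = np.arange(n_nodes) * spacing
    cov = lambda h: np.exp(-3.0 * np.abs(h) / vrange)
    z = np.full(n_nodes, np.nan)
    done = []                                        # indices already simulated
    for i in rng.permutation(n_nodes):
        if done:
            C = cov(x[done, None] - x[None, done])   # data-to-data covariances
            c = cov(x[done] - x[i])                  # data-to-node covariances
            w = np.linalg.solve(C, c)                # simple kriging weights
            mean, var = w @ z[done], max(1.0 - w @ c, 0.0)
        else:
            mean, var = 0.0, 1.0
        z[i] = mean + np.sqrt(var) * rng.standard_normal()
        done.append(i)
    return z

print(sgs_1d(20, 5.0, vrange=30.0)[:5])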

The Pangeos implementation of sequential Gaussian simulation simultaneously simulates multiple variables within multiple rock types. This is an adaptation of sgsim from GSLIB.

• There are two main options in the General box of parameters:

o You can specify the random seed number so that you get the same realizations each time you run the program. Alternatively, the seed number can be taken as some function of the time on your computer and, effectively, use a random seed.

o The number of realizations is self-explanatory.

• Output model will be created if it does not exist. The rock type and simulated values will be saved for every block.

Replacing the realizations is a good idea when you are fine tuning your simulation parameters - you do not want to keep some initial realizations. You can append to existing realizations to get more realizations with the same parameters or additional realizations with slightly different parameters (perhaps you want realizations coming from different scenarios).

• Simulation grid & input data parameters:

o The grid must be defined in the data objects before simulation. The categorical variable model (if used) and any secondary data model (if used) must be at this grid resolution. You may need to rescale those secondary models.

o You can generate an unconditional realization by unchecking the Conditional option.

o The input drillhole data should contain the variables you want to simulate, the declustering weight (if you are using a type of declustering that attaches a weight to each data), and the category/rock type.

o The variables to simulate are selected from the list of all variables associated to the chosen drillhole dataset.

• Categories parameters: you have the option of setting up different parameters within each category. You must have a categorical model at the scale of the grid being estimated. You must also have defined the categories as a data object. Then, you choose the categories or rock types that you want to simulate. A single realization or multiple realizations of the categorical model can be used.

The category logic matrix is used to specify hard and soft geological boundaries. Each row corresponds to the drillhole data in a specific category. Each column corresponds to a grid block in a specific category. If an i/j location is checked, then data of that row's category are used to simulate blocks of that column's category.

• The first option is the type of cosimulation (if you select more than one variable). The options are clear: independent, collocated - requires only the matrix of correlation coefficients between the variables (they can be specified by category), and full cokriging - requires all of the direct and cross variograms following a valid linear model of coregionalization.

There are four more binary options: (1) transform (or not) the variable - usually you transform the data unless you are using stepwise or normal scores transformed values, (2) consider (or not) ordinary kriging - usually you do not use ordinary kriging since it does not work well for simulation, although if the local mean varies significantly over the domain ordinary kriging might be considered, (3) consider (or not) self healing, which will dynamically correct the simulation to ensure that you get exactly the right histogram, and (4) consider (or not) locally varying mean values.

For the Transform the data option the upper and lower tails are extrapolated linearly; by default the minimum and maximum of the data are used, but a different minimum and maximum can be specified for the transformation under Settings.

• Search parameters:

o The data are assigned to the grid nodes. You have no choice. You may need to simulate a larger grid to include the data and not relocate them from outside the grid. In fact, data outside of the grid will not be relocated onto the grid. The closest maximum number of data needs to be specified.

o The search ranges and angles are as in GSLIB - angles operate as azimuth, dip, and plunge and the search radii are in the rotated directions. Setting the search to the variogram range is standard practice. The search can be set larger for ordinary kriging (to get a better implicit estimate of the mean) or smaller to rely less on stationarity. The search is common across all categories; the idea is to set a global search to the maximum search radius among the different rock types. The search/choosing of samples is rock type specific, as the measure of distance depends on the variogram of each category. In the figure below, sample 1 is closer than sample 2 to the location being estimated in each rock type.

[Figure: rock-type-specific search. Samples 1 and 2 lie within the maximum search radius in rock types RT1 and RT2; the selection of samples uses the variogram (γ) of each rock type as the distance measure.]

o The multiple grid search is a good idea to reproduce the long range variogram structure. There may be cases where it is not a good idea, but you should usually have this option turned on.

o The octant search is as in GSLIB - regular X/Y/Z octants are used and you must specify a maximum per octant. In general, you do not need the octant search if you set the maximum per drillhole.

• The Settings parameters are where you enter the variograms for each rock type; the minimum and maximum values (the tail interpolation, if transforming the data variable, is assumed linear and can be specified by category and variable); reference distributions (if the result of declustering is a distribution entry and not a weight); the locally varying means by rock type and variable (if needed); and the secondary data variable (if needed for cokriging). The panel to the right must be filled out when an option is chosen at the left.

• Cosimulation parameters must be set if you are performing some type of cokriging. The matrix of cross correlation coefficients is used for simulation. For now, the implementation considers global coefficients when there is more than one rock type.

• The Secondary variable model file is specified to the right of the cosimulation parameters if you have chosen to use a secondary variable under the options.

The output is multiple gridded realizations of multiple grades with rock type.

You can stop the process at any time by clicking the stop icon.

The statistics (mean, variance, shape of the distribution, variogram, etc.) of each realization will fluctuate around the model statistics. This fluctuation will be greater if the domain being simulated has either a small number of nodes or is smaller than the range of correlation. Fluctuations can be reduced, so that the histograms and variograms are honored more closely, by selecting the Consider Self Healing option during simulation or by later using the Realization Histogram Correction tool in the Postprocessing module. Self healing applies a dynamic correction of the kriging variance in addition to resetting the resultant distribution at the end to match a target histogram, as done in the postprocessing tools.

A multiple grid search is useful to reproduce long-range spatial structures beyond the search radius, such as those involved in zonal anisotropy. A coarse grid is simulated first and used to condition a second, finer grid and then a third one; a three-level search is hard coded in Pangeos.

5.4 Tutorial Five: Geostatistical Modeling

The purpose of this exercise is to lead users through the different options of the geostatistical modeling tools in Pangeos. The software must be installed on your computer with a valid license.

1. Load the drillhole data john-ddh.dat. You can find the john-ddh.dat file on the installation CD or from www.statios.com. This dataset is fashioned after a real example with one variable (copper) distributed in 2-D for grade control.

Load the drillhole data: the Cu grade is loaded as a continuous variable. There are no missing values so there is no need for the trimming limits. The program should load 329 data. Let the program create a default grid and then make the coordinates “nice”. A 2-D grid is created because the data are only two dimensional.

The data will appear as follows in the view window:

2. Calculate and display the Cu histogram and probability plot. The vertical jumps on the probability plot indicates the need for despiking. Despiking will make the normal scores transform show slightly less randomness. Run despiking.

Cell declustering is not really required with the regular spaced blastholes. The result will be influenced too much by the blastholes around the edges.

3. The normal scores transform is needed to get the data in the right units for variograms. Run the normal scores transform on the despiked copper grade.

4. The variograms of the variable being kriged/simulated are required. You need the variograms of the normal scores for conventional SGS. You need the variograms of the raw data for kriging.

Run variogram maps to help detect anisotropy directions. The following setup can be used. You should visualize the central slice in each direction.

There is no strong anisotropy; perhaps a little in the -30 to -40 degree direction. Run directional variograms for fitting.

The automatically fitted directional variograms are shown to the left below.

5. Create ten sequential Gaussian simulations. The 2-D simulations should run quite fast. There is nothing particularly difficult with the SGS setup; 12 data and an isotropic search of 75m were used for this example.

Following are the first four realizations.

6. The simulated realizations can be averaged using the Analyze Multiple Realizations tool in the Postprocessing module, and compared to kriging. There is no requirement that they match each other, but the differences should be relatively minor. Following is the result with kriging using only 6 data. The results are smoother when more data are used. The match to the average of all the realizations is good.

6 Postprocessing Tools

6.1 Accuracy Plots

In the context of evaluating the goodness of a probabilistic model, a probability distribution is said to be accurate if the fraction of true values falling in the p interval exceeds p for all p in [0,1]. The precision of an accurate probability distribution is measured by the closeness of the fraction of true values to p for all p in [0,1]. A graphical way to check the accuracy and precision of a model is to cross plot the actual fraction of times a true value falls in the p interval versus p; if the points fall on the 45°-line, the probability distributions are accurate and precise. Points that fall above the line are accurate but not precise, while points that fall below the line are neither accurate nor precise. In the first case the distributions of uncertainty are too wide; in the second case they are too narrow.

This tool is used to cross validate the distributions of uncertainty that come out of a multivariate Gaussian model. The true values are the normal scores transform of the grade. The distributions are specified by the mean (estimate) and variance coming out of the cross validation option in the kriging program.

• The drillhole data selected should include the normal scores of the true grades and the results of cross validation, estimate and kriging variance.

• The size of the probability interval can be chosen under Probability increment.

A preview of the accuracy plot is available under the Graphic 1 tab. The probability interval is plotted against the fraction of values that fall within the corresponding interval.
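
The check described above is easy to reproduce outside Pangeos. A minimal Python sketch (using scipy for the Gaussian quantile; the variable and function names are hypothetical) computes, for each probability p, the fraction of normal-score true values falling inside the symmetric p interval of N(estimate, kriging variance):

import numpy as np
from scipy.stats import norm

def accuracy_curve(true_ns, est, kvar, p_values):
    """Fraction of true (normal-score) values falling inside the symmetric
    p-probability interval of N(estimate, kriging variance), for each p.
    Points on the 45-degree line indicate accurate and precise distributions."""
    sd = np.sqrt(kvar)
    fractions = []
    for p in p_values:
        z = norm.ppf(0.5 + p / 2.0)                 # half-width in standard deviations
        inside = np.abs(true_ns - est) <= z * sd
        fractions.append(inside.mean())
    return np.array(fractions)

p = np.arange(0.1, 1.0, 0.1)
rng = np.random.default_rng(1)
est, kvar = np.zeros(500), np.full(500, 1.0)
truth = rng.standard_normal(500)                    # consistent with N(est, kvar)
print(np.round(accuracy_curve(truth, est, kvar, p), 2))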

6.2 Models Merging

This tool merges grade models using a categorical model. The categorical model can be a single deterministic model or a set of realizations from sequential indicator simulation. Similarly, the grade models can be a single realization (from kriging, for example) or multiple simulated realizations.

• A gridded categorical model is required with a categorical variable. The categories have to be defined as a data object from which you must choose the rock types to consider.

• For each category, a grade model needs to be specified, or a constant or missing value assigned to all blocks within the category. The From model or Set Value options can be applied to all categories at once.

• There are three options for the output model(s): extract only the diagonal realizations (the first categorical realization with the first grade realization, the second with the second, and so on), or generate separate models for each category model realization and/or each grade model realization.

[Figure: merging options - L categorical realizations plus a deterministic categorical model (L+1 in total) combined with L grade realizations plus a deterministic grade model.]

6.3 Extract from Grid

This tool extracts gridded values at the locations of drillhole data; it considers the block with the closest center coordinates. This is useful for checking a 3-D model.

• A drillhole dataset is required; no variable is needed only the locations are used.

• Multiple realizations will be considered in the gridded data.

Additional variables are added to the drillhole dataset; the user specifies the variable prefix. If different categories exist in the model, categorical variables are also created.

6.4 Average within Domains

Domains may be defined that represent years of production, stopes, mining phases, rock types, and so on. This program will average the grades or any other variable from drillhole data or gridded models with multiple realizations within the specified domains.

• The input data can be drill hole type data or a single realization from a gridded model.

• Domains are specified by any categorical or integer type variable.

• A tons variable can be used to weight the average by specific gravity or another type of weight, such as a declustering weight.

A general data entry is created with the averages. The averages are also shown under the Result 1 tab, from where results can be printed or imported into Microsoft Word.

6.5 Model Operations

This tool manipulates any data object with basic arithmetic and logical operations. Any number of operations may be performed and any number of intermediate variables can be saved.

• The basic logic of each operation is C = A ⊗ B, where A, B, and C are data objects. The structure of A dictates what B can be - a model of the same size or the same drillhole dataset. The variable created as C can be used as an intermediate variable or saved as a variable in the A dataset. ⊗ is an operation of some kind.

• The mathematical operations include:

o Assigning a constant value or an existing variable

o Basic arithmetic between two values (+,-,x,/)

o Basic exponentiation/logarithms (raise a variable to a power, natural logarithm, logarithm with base 10, 10 to a power, exponentiate)

o Basic trigonometric functions (sine, cosine, tangent)

o Absolute value

o Basic logic (assign 0 or 1 depending on whether a value is greater than, greater than or equal to, less than, less than or equal to, equal to, or not equal to another)

o Clip or trim values

o Draw from a uniform distribution

• This program provides the basic functionality required. You cannot completely avoid writing some kind of program or script.

Additional variables are added to the dataset.

6.6 Model Rescaling

This tool will upscale or downscale gridded realizations of categorical or continuous variables. This is particularly useful for the calculation of recoverable reserves at different block sizes.

• You must choose a variable from a set of input 3-D model(s) to be rescaled. The input grid definition is associated to the gridded data object and you do not need to specify what it is. It will be displayed in the lower gray information window.

• The model(s) will be transformed to the output grid definition. The output grid definition may be smaller or larger than the input grid definition. The output grid does not need to be aligned with the input grid; blocks outlines can overlap.

• The variable may be continuous or categorical. A volume-weighted average of the continuous variable is performed (the average will be mass-weighted if you choose a specific gravity model). The most common categorical variable is kept.

• A block in the output grid is set to missing if more than 50% of its volume is missing in the input grid.

• The specific gravity model is optional. If you choose it then a mass-weighted average will be used.

A new gridded model data object is added at the target grid.
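
For intuition, a minimal Python sketch of the upscaling rules described above (equal-volume cells, no specific gravity weighting, integer rescaling factor; the function name is hypothetical) could look like:

import numpy as np

def upscale(fine, factor, categorical=False):
    """Upscale a 2-D gridded variable by an integer factor per axis:
    arithmetic average for a continuous variable, most frequent code for a
    categorical one, and missing (NaN) when more than half the fine cells
    inside a coarse block are missing."""
    out = np.full((fine.shape[0] // factor, fine.shape[1] // factor), np.nan)
    for j in range(out.shape[0]):
        for i in range(out.shape[1]):
            block = fine[j*factor:(j+1)*factor, i*factor:(i+1)*factor].ravel()
            ok = block[~np.isnan(block)]
            if ok.size < block.size / 2.0:
                continue                              # > 50% missing -> stay missing
            if categorical:
                codes, counts = np.unique(ok, return_counts=True)
                out[j, i] = codes[np.argmax(counts)]  # most common category
            else:
                out[j, i] = ok.mean()                 # volume-weighted (equal cells)
    return out

fine = np.arange(36, dtype=float).reshape(6, 6)
print(upscale(fine, 3))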

6.7 Categorical Model Cleaning

One concern with cell-based categorical realizations is the presence of short scale variations/noise. These noisy features often appear geologically unrealistic and may have an impact on predicted resources.

A second concern is that the rock type proportions of sequential indicator simulations often depart significantly from their target (input) proportions. Typically, small proportions (5-10%) are poorly matched, with the simulated proportion systematically too high or too low.

The key idea behind this tool is that the rock type at each location is replaced by the most probable rock type based on a local neighborhood. The probability of each rock type is based on (1) closeness to the location, (2) whether or not the value is a conditioning datum, and (3) the mismatch from the global target proportion. The rock type with the maximum probability is selected; no random drawing is considered. Each node is considered independently of all others; the algorithm is not sequential. Note that this algorithm does not ensure exact reproduction of the target proportions, but the target proportions will be more closely reproduced. In general, variability in rock type proportions is an inherent aspect of uncertainty and perfect reproduction of target proportions is not a critical goal.
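
A minimal Python sketch of the scoring at a single node may clarify the idea; the distance weighting and the w_prop factor are assumptions for illustration, not the exact Pangeos formulation:

import numpy as np

def clean_node(window_codes, window_dist, target_prop, current_prop, w_prop=0.5):
    """Score each rock type in a local window by distance-weighted frequency,
    nudge the score by the mismatch between current and target global
    proportions, and return the highest-scoring code (no random drawing)."""
    rts = np.array(sorted(target_prop))
    weights = 1.0 / (1.0 + np.asarray(window_dist))          # closer cells count more
    score = np.array([weights[window_codes == k].sum() for k in rts])
    score = score / score.sum()
    adjust = np.array([target_prop[k] - current_prop[k] for k in rts])
    return int(rts[np.argmax(score + w_prop * adjust)])

# One node with a 5-cell neighborhood, two rock types (1 and 2):
codes = np.array([1, 1, 2, 2, 2])
dist = np.array([1.0, 1.4, 1.0, 2.0, 2.2])
print(clean_node(codes, dist, {1: 0.6, 2: 0.4}, {1: 0.5, 2: 0.5}))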

• A gridded file with the categorical model to be cleaned is required, with a categorical variable.

• If you choose to filter by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program does not loop over the chosen rock types; transformation is performed with one set of data. You may have to run this program many times.

• The target proportions for each category have to be input or they can be obtained from the categorical model. The little arrow at the right of the entry boxes adjusts the proportion to sum up to one.

• A smoothing window has to be specified for the X, Y and Z directions. These are half-window sizes, e.g., a size of 4 in X implies that 9 cells will be considered in the X direction. The size of the smoothing window is directly proportional to the "cleanliness" of the results: the resultant categorical model will appear smoother/cleaner when a larger smoothing window is used.

• A conditioning data set can be used if the Drillhole Data option is selected. If so, a drillhole dataset with a categorical variable must be input. The rock type must also be present if you are filtering by rock type.

A new variable is added to the categorical model.

6.8 Realization Histogram Correction

The histogram of simulated realizations may not match the input target histogram within acceptable tolerances. This tool will correct the histograms.

The transformation method is a generalization of the quantile transformation used for normal scores. The p quantile of the original distribution is transformed to the p quantile of the target distribution.

Values at data locations can be honored. In this case the quantile transformation is applied progressively as the location gets further away from the set of data locations. The distance measure is proportional to a kriging variance at the location of the value being transformed. That kriging variance is zero at the data locations (hence no transformation is applied there) and increases away from the data. Because not all original values are fully transformed, reproduction of the target histogram is only approximate. A control parameter allows the desired degree of approximation to be achieved at the cost of generating discontinuities around data locations; the greater the value of the control parameter, the smaller the discontinuities.
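
A simplified Python sketch of the quantile mapping, with an assumed exponential blend between the original value and its mapped value as a function of the kriging variance (the exact Pangeos control parameter and blend are not documented here), is:

import numpy as np

def histogram_correct(values, target, kvar=None, control=1.0):
    """Quantile-map realization values onto a target distribution.  If a
    kriging variance is supplied, the correction is applied progressively:
    no change where kvar = 0 (data locations) and full quantile mapping
    far away; 'control' scales how quickly the blend kicks in."""
    ranks = np.argsort(np.argsort(values))
    p = (ranks + 0.5) / len(values)                         # quantile of each value
    mapped = np.quantile(target, p)                         # same quantile in target
    if kvar is None:
        return mapped
    blend = 1.0 - np.exp(-control * np.asarray(kvar))       # 0 at data, -> 1 far away
    return (1.0 - blend) * values + blend * mapped

rng = np.random.default_rng(7)
realization = rng.lognormal(0.0, 0.8, 1000)
target = rng.lognormal(0.2, 0.5, 1000)
print(histogram_correct(realization, target).mean(), target.mean())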

• A gridded model is required with a continuous or categorical variable. The rock type must also be present if you are filtering by rock type.

• If you choose to transform by rock type, you must have the categories defined as a data object and you must choose the rock types to consider. This program does not loop over the chosen rock types; transformation is performed with one set of data. You may have to run this program many times.

• A target distribution must be specified. Target histograms within each rock type must be specified if transforming by rock type.

• An input kriging variance file and variable must be provided if the option Honor local data is selected.

A new model will be created with a variable with the corrected grades.

6.9 Reverse Stepwise Transformation

The stepwise conditional transformation is uniquely powerful in removing complex correlation between multiple variables, permitting independent simulation of the transformed values. This tool reverses the transformation for gridded simulation results.

• You must specify the input variables/models. They should be standard normal variables, on the same grid system, and have the same number of realizations. The values can only be back transformed as a set.

• The gray information window toward the bottom of the parameter window will help you double check the input grids.

• The transformation table coming from the stepwise transformation program is required.

• The categorical variable specification must also be present if you are filtering by rock type. If you choose to filter by rock type, you must have the categories defined as a data object.

The back transformed variables are saved in a gridded model with the variable prefix that you choose.

6.10 Recoverable Reserves Calculation

This tool calculates the probability of ore, the grade of ore, and the expected grade from multiple realizations.

• Multiple realizations of a continuous (or categorical) variable are required.

• The cutoff is applied to the variable to calculate the average grade above cutoff and the probability of being above cutoff.

The output gridded data object has three variables added: the average grade, the probability to be above cutoff, and the grade above cutoff.

6.11 Analyze Multiple Realizations

This tool is a simplification of postsim from GSLIB. It has the option to calculate the E-type estimate, that is, the point-by-point average of a set of realizations, and/or to compute the variance of the conditional distribution.

• A gridded model with multiple realizations of a continuous or categorical variable is required.

• The average and/or the variance can be calculated; you can specify the names of the output variables.

The output is a new model with the same grid specification.

6.12 Source of Uncertainty

This tool identifies what fraction of the joint uncertainty is due to the categorical rock type model and what fraction is due to the grade modelling.

The tons of ore, grade of ore, and quantity of metal are calculated at a given cutoff grade for a set of realizations; these values are calculated for all combinations of N grade model realizations and M category model realizations. For example, for 100 grade realizations and 100 category realizations there are 10,000 combinations. The variance of these 10,000 values corresponds to the joint uncertainty. This joint uncertainty has components due to the variability of the geology as well as the grade modeling. To assess the uncertainty associated with the rock type modeling, the variance over all categorical realizations is calculated while fixing the grade model; the variances obtained for each grade realization are then averaged. The uncertainty in the grade modeling is obtained similarly.
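
The variance bookkeeping can be sketched in a few lines of Python; metal is a hypothetical N x M array of quantity of metal for every grade/category realization pair:

import numpy as np

def uncertainty_split(metal):
    """metal[i, j] = quantity of metal for grade realization i combined with
    category realization j.  Returns the joint variance over all N x M
    combinations plus the average conditional variances attributed to the
    grade model and to the category model."""
    joint = metal.var()
    grade_part = metal.var(axis=0).mean()   # vary grades with category fixed, then average
    categ_part = metal.var(axis=1).mean()   # vary categories with grade fixed, then average
    return joint, grade_part, categ_part

rng = np.random.default_rng(3)
grade_effect = rng.normal(0, 2.0, (100, 1))   # grade realizations vary more ...
categ_effect = rng.normal(0, 1.0, (1, 100))   # ... than the category realizations
metal = 1000.0 + grade_effect + categ_effect
print(np.round(uncertainty_split(metal), 2))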

• Gridded models with multiple realizations are required for the grade and the categorical variables.

• The corresponding categories have to be defined as a data object. Grade models and different cut off grade can be specified independently for each rock type, or a constant (missing) value can be assigned.

• Gridded models with deterministic models for the grade (the output of kriging, for example) and for the rock type model (the rock type codes assigned from wireframes) are optional. They can be used by ticking the option Use deterministic models for grade and/or category models.

• Either all realizations or a subset can be processed for the grade or the category models.

• The grade units and the tonnage equivalent to a block are required to calculate the tons of ore and the quantity of metal.

The output is three files of general type. The Full Table file contains the tons of ore, grade of ore, and quantity of metal for each combination of grade and category realizations. The Grade Uncertainty file consists of the means and standard deviations of the tons of ore, grade of ore, and quantity of metal over all categorical realizations for each grade realization. Equivalently, the Category Uncertainty file contains the means and standard deviations over all grade realizations for each category realization.

Only the results are saved in the form of tables; none of the 3D models used for the calculations are saved.

6.13 Realization Ranking

Multiple realizations are ranked by ore tonnes, ore grade, and metal content. The metal content can be discounted by bench (Z-slice) to favor realizations with higher grade early in the mine life.

• Multiple realizations of a continuous (or categorical) variable are required.

• The cutoff is applied to the variable to calculate the average grade above cutoff and the ore tonnes above cutoff.

• The discount rate is used for the quantity of metal ranking. It is really set up for an open pit setting. The metal on the top bench is not discounted. Each bench thereafter is discounted by (1 + rate)^bench, where bench is 0 at the top, 1 for the one below, and so on, as illustrated in the sketch below. The rate is entered as a percent and converted to a fraction in the program.

A general data object is added that has all of the ranking measures, the ranks, and the overall average ranks.
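
As a small worked example of the bench discounting, the following Python sketch (array and function names are hypothetical) shows how a realization with more metal near the top of the pit ranks ahead of one with the same total metal at depth:

import numpy as np

def discounted_metal(metal_by_bench, rate_percent):
    """Discount the metal content of each bench by (1 + rate)^bench, with
    bench 0 at the top, and sum; realizations with more metal high in the
    pit score better."""
    rate = rate_percent / 100.0                       # percent -> fraction
    bench = np.arange(len(metal_by_bench))
    return float(np.sum(metal_by_bench / (1.0 + rate) ** bench))

# Same total metal, different vertical distribution:
top_heavy = np.array([500.0, 300.0, 200.0])
bottom_heavy = np.array([200.0, 300.0, 500.0])
print(discounted_metal(top_heavy, 10.0), discounted_metal(bottom_heavy, 10.0))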

6.14 Resource Classification

The uncertainty in grades is quantified by multiple realizations. This program categorizes that uncertainty for the purposes of resource classification.

• Multiple grade realizations are required, coming from the SGS program or loaded into Pangeos from GSLIB or some other application. You will need to select the grade variable to consider, and the rock type if you are going to classify differently in different rock types.

• A variogram is needed to scale the local distributions to different volume supports. The input realizations represent the block-wise uncertainty. Each block will be scaled up to the volume specified for the measured and inferred classification.

• If you choose by-rock-type classification, you will need to select the rock types to consider and the variogram for each rock type.

• All blocks not meeting the measured and indicated criteria are called inferred. The measured and indicated criteria are specified by three parameters:

o Volume or time period of production specified by a 3-D regular volume. The block-wise distributions will be corrected to this volume scale; the realizations are not averaged up to the larger scale. One reasonable choice would be a monthly scale for measured and a quarterly scale for indicated. Another possibility would be a quarterly scale for measured and a yearly scale for indicated. The reporting codes require you to document clearly your choice.

o Allowed deviation or precision for the classification. These are the +/- limits that you will require the grade to be within a significant fraction of the time. 15% is reasonable. Some would choose 15% for measured and 50% for indicated.

o Probability to be in limits or accuracy for the classification. This is the probability or fraction of times that you will require the scaled block grades to be within the specified precision to meet the specified criteria. 80% is a reasonable number. Some may prefer 90% or even lowering this to 50% for indicated.

• The best approach to setting up a classification scheme is to fix two of the parameters and change the remaining one between measured and indicated.

For example, in the exploration phase you could set the volume to be a year’s production volume, set the allowed deviation to 15%, and set the probability to be within at 80% for measured and 50% for indicated.

In production, you could set a monthly volume for measured, a quarterly volume for indicated, an allowed deviation of 15%, and a probability to be within of 80%.

There are many site-specific considerations. This program gives you the flexibility to choose a variety of schemes.

• You must choose one of the three classic change of shape models. It is common to choose affine for a vein-type mineralization and the discrete Gaussian model for a disseminated mineralization. The indirect lognormal correction is a compromise.

The output model is created or updated with summaries for the volume variance corrected measured criteria; you will have to turn off volume variance correction for measured to get the uncorrected results.

The classification contains code 1 (measured), 2 (indicated) or 3 (inferred).
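
A simplified Python sketch of the per-block test, assuming the volume-corrected distribution is already available as an array of scaled grades and letting only the probability threshold differ between measured and indicated (in Pangeos the volume support and allowed deviation can differ as well), is:

import numpy as np

def classify_block(scaled_values, dev=0.15, prob_measured=0.8, prob_indicated=0.5):
    """Classify one block from its volume-corrected distribution of simulated
    grades: measured if the grade is within +/- dev of the mean with
    probability >= prob_measured, indicated with >= prob_indicated,
    otherwise inferred (codes 1, 2, 3)."""
    vals = np.asarray(scaled_values, dtype=float)
    m = vals.mean()
    within = np.mean(np.abs(vals - m) <= dev * m)
    if within >= prob_measured:
        return 1
    if within >= prob_indicated:
        return 2
    return 3

rng = np.random.default_rng(11)
tight = rng.normal(1.0, 0.05, 200)   # narrow distribution of scaled grades
wide = rng.normal(1.0, 0.60, 200)
print(classify_block(tight), classify_block(wide))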

6.15 Reporting within Categories

This tool reports the uncertainty within a categorical model. The categorical model could represent geologic rock types, resource classification (measured, indicated, and inferred), time periods, benches, or stopes.

• A gridded categorical variable model is required. These specify which blocks belong to which category for creating the summary report. You select the variable within the 3-D gridded model that represents your categorical variable of interest.

• You must also have the categories defined as a data object. These could represent any geologic or other variable.

• You must specify the input profit (for reporting above a cutoff) and grades model. The grid definition for this model should be the same as the categorical variable model.

• The gray information window toward the bottom of the parameter window will help you double check the input grids.

• A 3-D specific gravity model can be specified for the tonnage calculation. A default of 2.65 is used if no specific gravity model is considered.

• The cutoff applies to the variable you choose to call profit. It could be a profit or a grade variable.

The summary will contain the tonnes and grade above cutoff in the different categories.

• You can specify any number of variables.


A general data object is added with the results.

6.16 Tutorial Six: Postprocessing

The purpose of this exercise is to show some of the tools from the Postprocessing module in Pangeos. The software must be installed on your computer with a valid license.

For this exercise we will use the simulations created from the drillhole data john-ddh.dat. You can find the john-ddh.dat file on the installation CD or from www.statios.com. To generate the 3D model see the tutorial at the end of chapter 5.

1. First we need to block average the 1m grade simulations up to a coarser scale. A 5m grid is reasonable given the nominal 10m blasthole spacing; a grid size of 1/3 to 2/3 of the blasthole spacing is reasonable. Following are the parameters and a slice through the first realization at a 5m square resolution.

2. Perform a recoverable reserves calculation for a cutoff grade of 0.85% Cu, and look at the probability of being ore and the average grade above cutoff maps.

3. Average all the realizations of the rescaled model using the Analyze Multiple Realizations tool.

4. Rank the rescaled realizations considering a cutoff grade of 0.85%Cu.

The ranking report with results can be found under General/Data; here it has been formatted into the following table:

Realization Number | Number of Ore Blocks | Ore Block Ranking | Average Ore Grade | Ore Grade Ranking | Metal Content | Metal Content Ranking | Overall Average Ranking
1  | 812 | 3  | 1.76 | 8  | 1431.7 | 5  | 6
2  | 826 | 2  | 1.77 | 6  | 1461.0 | 1  | 2
3  | 797 | 5  | 1.83 | 1  | 1459.4 | 2  | 1
4  | 772 | 10 | 1.74 | 9  | 1347.0 | 10 | 10
5  | 846 | 1  | 1.72 | 10 | 1456.7 | 3  | 4
6  | 804 | 4  | 1.78 | 4  | 1429.7 | 6  | 5
7  | 776 | 9  | 1.76 | 7  | 1369.0 | 9  | 9
8  | 795 | 7  | 1.81 | 2  | 1442.2 | 4  | 3
9  | 787 | 8  | 1.78 | 3  | 1402.0 | 8  | 8
10 | 796 | 6  | 1.77 | 5  | 1408.3 | 7  | 7

5. Create a resource classification model using the following parameters. You can experiment with different probability within limits, deviation values, and volumes.

You will need to define a category set in Object/Categories to display the measured, indicated and inferred categories.

6. Create a report for a cutoff of 0.85% Cu using the resource classification created previously.

7 Grade Control Tools

Dig limit selection is difficult due to the uncertainty in the precise location of the material to be mined at a specific cutoff grade, and also due to the limitations imposed by the size of the mining equipment. Proper dig limits are important because: (1) it is uneconomical to process waste as ore or to mine ore as waste, (2) low quality ore can dilute high quality ore and/or defeat blending practices, (3) large mining equipment cannot efficiently mine tortuous dig limits, and (4) ore is diluted or lost when the dig limits are too smooth.

7.1 Expected Profit Calculation

Basing the ore/waste classification on profit instead of an estimated grade makes it possible to incorporate spatial/temporal variations in the milling cost, mining cost, plant recoveries, and metal prices into the grade control program. The use of profit also has the advantage of explicitly accounting for uncertainty in the grade estimation. Multiple ores can easily be handled in a profit-based grade control program by transforming each ore grade to profit and adding all the profits. Contaminants can also be transformed to an impact on profit.

Dig limit optimization requires a 2-D grid of expected profit values. This program provides the basic functionality required for profit calculation. You may have to calculate the profit outside Pangeos with a more complex program. A more flexible scripting language is being considered for future versions of the program.

The revenue of each block for each realization is calculated as if the block were ore:

revenue_ore = grade · recovery(grade) · price - milling cost - ore mining cost

If revenue_ore is less than the revenue of shipping the material as waste ("- waste mining cost"), then it is better to ship it as waste. The average over many realizations can be calculated. Thus, selecting dig limits on this map of expected profit accounts for the uncertainty in the grade model. Six variables are saved for each block (a small sketch of this calculation follows the list below):

1. Expected grade - the average over all realizations.

2. Expected profit diglimit - the profit for dig limits, positive if called ore and negative if called waste. It is the average profit over all realizations. Anything that is better off being called ore has a positive profit (even marginal material!).

3. Expected profit if called ore,

4. Expected profit if called waste,

5. Ore waste indicator, and

6. Ore-waste-marginal indicator. Marginal material is better sent to the mill than to the dump, but it still costs money; that is, it does not pay for all of the mining cost.
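
A minimal Python sketch of the revenue logic above, averaged over realizations, is shown next. A constant recovery is assumed here, whereas Pangeos interpolates the recovery curve, and all names are hypothetical:

import numpy as np

def expected_profit(grades, recovery, price, mill_cost, ore_mine_cost, waste_mine_cost):
    """Per-block expected profit over realizations: revenue if sent to the
    mill, floored by the revenue of mining it as waste, then averaged."""
    rev_ore = grades * recovery * price - mill_cost - ore_mine_cost
    rev = np.maximum(rev_ore, -waste_mine_cost)       # ship as waste if milling loses more
    return rev.mean(axis=0)                           # expected profit per block

# 10 realizations of 4 blocks, grade as a fraction of metal per tonne:
rng = np.random.default_rng(5)
grades = rng.lognormal(np.log(0.008), 0.3, (10, 4))
print(np.round(expected_profit(grades, 0.8, 1800.0, 9.0, 1.5, 1.0), 2))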

The parameters in the program are as follows:

• 2D/3D gridded grade models with multiple realizations are required (a deterministic model can be used too, but then there is no averaging over possible profits); these would normally come from SGS. The slice number is required in the case of a 3D model because the output is for a single bench only. It is common to simulate the bench of interest and the bench above, so that the blasthole data above the bench of interest are also used.

• A single grade is chosen. The choice of units is fraction, percent, or ppm; the grade is multiplied by 1, 0.01, or 0.000001, respectively. Then, the grade is considered to have units of metal per tonne. You will need to make sure that all of the costs, price, and grade units are compatible with the equation above.

• The milling cost (including all overhead costs), ore mining cost, waste mining cost, and price go into the equation above using the grade multiplied by the factor that depends on your specification of units.

• The recovery for a particular grade is linearly interpolated between the values entered in the recovery curve. Values outside the limits are set to the limits. The units of recovery are a fraction.

The output includes a 2-D gridded model with the six variables discussed above.

7.2 Dig Limit Optimization

Dig limit optimization is done using the simulated annealing formalism. A global objective function measures the profit associated with the material contained in the dig limit and the loss associated with the dig limit tortuosity for the available mining equipment; it is evaluated as the dig limits are changed randomly. A change is accepted if the loss is reduced, and possibly accepted if the loss increases. The dig limits are perturbed until the loss is minimized.

The loss incurred by the mining equipment is quantified by accounting for the tortuosity of the dig limit and penalizing the profit accordingly. The penalty function is based on the angle defined by three consecutive vertices on the dig limit polygon; small angles incur higher penalties than large angles. The digability parameter summarizes how easily a dig limit can be mined. Dig limits with low digability have maximum profit and minimum ore loss but are difficult to mine. High digability dig limits have less than the maximum profit, due to equipment-induced ore loss, but are easy to mine.
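
The following Python sketch shows one plausible form of the objective and the annealing acceptance rule; the angle-based penalty is an assumption for illustration and is not the exact Pangeos loss function:

import numpy as np

def objective(profit_inside, vertices, digability):
    """Profit captured by the polygon minus a tortuosity penalty that grows
    as interior angles get sharper; digability scales how strongly the
    equipment constraint is weighted."""
    pen = 0.0
    n = len(vertices)
    for k in range(n):
        a, b, c = vertices[k - 1], vertices[k], vertices[(k + 1) % n]
        u, v = a - b, c - b
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
        angle = np.arccos(np.clip(cosang, -1.0, 1.0))
        pen += np.pi - angle                          # sharp corners -> big penalty
    return profit_inside - digability * pen

def accept(delta, temperature, rng):
    """Simulated-annealing rule: always accept an improvement, sometimes
    accept a worse polygon with probability exp(delta / T)."""
    return delta >= 0 or rng.random() < np.exp(delta / temperature)

square = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
print(objective(50000.0, square, digability=100.0))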

This program takes an input polygonal dig limit and perturbs it to maximize profit and digability as defined by the user.

• You need the results from the expected profit program or a 2-D model with (1) profit - positive for ore and negative for waste, and (2) expected grade. This profit calculation may need to be done outside of Pangeos by a custom-written program; the results are then imported as a 2-D grid and then input to this program.

• This program maximizes the profit within the closed polygonal dig limit. The profit is multiplied by -1 if you check the “Waste” option. This optimizes a waste polygon.

• An initial polygon is required to start the dig limit optimization. This is chosen under the “Polygon” option.

• You can also enter a bounding polygon to enclose the optimized polygon. This is optional.

• The digability is a relative parameter that controls how smooth the dig limit should be. As the value approaches zero the emphasis is on profit and a high degree of selectivity is assumed. As the parameter value increases the emphasis is on the mining equipment constraints. There is no theory for selecting this parameter; it is selected based on professional judgment. A suggested approach is to build a catalogue of dig limits for a range of digability parameters.

• The optimization number is a subjective choice of how long you want the program to run. It is good practice to start with a small value and then increase it until the results stop changing.

The optimized polygon is added to the polygon data types.

7.3 Dig Limit Reporting

The grade and profit results within an initial and an optimized polygon are reported so that you can judge the value of dig limit optimization.

• Initial and optimized polygons are required.

• You need the results from the expected profit program or a 2-D model with (1) profit - positive for ore and negative for waste, and (2) expected grade.

• You can specify the material types (ore, waste and/or marginal) to be reported.

• The specific gravity can be input as a variable of the expected profit model or as a constant value for ore, waste and marginal materials.

• The bench height is also required.

The results are saved in a general data type.

7.4 Tutorial Seven: Grade Control

The purpose of this exercise is to lead users through the grade control tools in Pangeos. The software must be installed on your computer with a valid license.

For this exercise we will use the simulations created from the drillhole data john-ddh.dat. You can find the john-ddh.dat file on the installation CD or from www.statios.com. To generate the 3D model see the tutorial at the end of chapter 5. We will use the rescaled 5m grade simulations; this scale is more reasonable for dig limit definition (see the tutorial at the end of chapter 6 if you need help with the Model Rescaling tool).

1. The expected profit and grade must be calculated for dig limits. This is a very mine-specific task and a separate function may be required. We use simple parameters here to impose a cutoff of about 0.75% copper. The price was set at 1800 to achieve a cutoff of 0.75%; that is, the price would have to be about 1800 ≈ $11/(0.8 · 0.75 · 0.01) to break even at 0.75% copper. The costs and prices are all per tonne.

You will need to define a category set in Object/Categories to display the ore/waste/marginal indicators.

The ore/waste/marginal indicator is shown below and to the left. Two versions of the expected profit map are shown. The center one has limits from -10$/t to 50$/t. The one at the right has limits from -2$/t to 2$/t to highlight the marginal areas and zones where the dig limits are important.

2. Digitize an initial dig limit to be optimized under the View. You should choose to view the expected profit for dig limits. The following shows the initial and optimized polygons for the easy little area to the northeast. A waste polygon was selected and optimized.

Here are the initial and optimized polygons for another area. Note that the internal pod of waste is difficult to optimize - it would be better to set up another waste polygon.

8 Data Tools

Data tools are used to modify, filter and convert data.

8.1 Data Tools access

Data Tools are accessible either from:

• The data tab

• The view tab

The variable selected at the top of the Data Tools window is the primary variable on which the tool will be applied.

8.2 Data Tools available for all data types

Scalar The scalar tools are used to set values to variables, subtract/multiply/divide variables, trim and clip.

• Trimming: all values below (Trim if <) or above (Trim if >) the specified value are removed (set to missing)

• Clipping: all values below (Clip if <) or above (Clip if >) the specified value are set to that value

New continuous or categorical variables can be created, initialized as all missing or as a copy of another variable.

Filter The tools allow setting a variable to missing, constant, or another variable, depending on conditions on a different variable.

8.3 Drillhole Data Tools

Drillhole Assign values by ID.

Model Assign grid cells in a model the value of a drillhole point within that cell, or an average of all values within that cell.

Coordinates Transform coordinates (to flattened space).

2D Linear Regression The tool creates or updates an existing 2D model or a surface, by doing a linear regression on the selected variable or a trend computation.

8.4 Model Data Tools

Model Transfers variables to another model (same dimension)

8.4.1 3D Model Data Tools

Model Computes horizontal and vertical trend (to 3D Model)

Model 2D Assigns to 2D Model (average, most likely, sum)

Model 1D Assigns to 1D Model (average, most likely, sum)

Unfold Unfold back to stratigraphic space.

8.4.2 2D Model Data Tools

Surface Converts to a surface. The selected variable becomes the elevation.

8.5 Surfaces Data Tools

Surface Adds and subtracts surfaces, computes thickness, removes crossings. Assigns to a different grid.

Model 2D Convert to Model 2D. DOCUMENTATION INCOMPLETE

9 Scripting

Scripting is still being implemented. It adds a level of flexibility to Pangeos through text commands that can be saved to files and modified.

9.1 Running a script

Scripts can be run interactively in the Script editor window (Tools|Execute Script) or from the command line, for example:

"C:\Program Files\Statios\Pangeos\Pangeos.exe" /project "C:\Pangeos\Projects\Bill" /script "C:\Pangeos\Projects\Bill\Scripts\test4.pgsc"

9.2 Script structure

A script file can be saved from the Script editor window or from any text editor. A script file is processed line by line. Lines can be broken with '\' and combined with ';'. The first word on a line needs to be one of the following:

- ScriptVariable type (declaration)   ex: int a
- ScriptVariable (assignment)         ex: a=4
- Keyword                             ex: option verbose on
- # (comment)                         ex: # This is a comment

9.3 Keywords

- option    sets various options
- echo      writes out line
- process   processes rest of line
  o parset    parameter set
  o data
  o variable
- script    runs another script
- display   displays a file
  o postscript file
  o text file
  o RTF file
- exit      exits script

9.4 ScriptVariable

9.4.1 Types

- long
- double
- string
- data
- variable
- parset

9.4.2 Declarations

All variables start with '$'. Variable names must start with a letter or '_' and contain only letters, numbers, or '_'. Names are case sensitive.

Simple declaration: double $d
Declaration and assignment: double $d=4.3

9.4.3 Assignment

- double $d=5.1
- long $l=3
- string $s="The value of the double is " $d
- data $dh=data type Drillhole "Drill data"
- variable $vh=variable data type Drillhole "Drill data" name "Rock type"
- variable $vh=variable $dh name "Rock type"
- parset $p=parset group "Plotting" "Histogram Plot" "Au"

The Script Editor window fills out the syntax.

9.4.4 Special ScriptVariables

- $date       5/24/2006
- $time       10:50 AM
- $longdate   Wednesday, May 24, 2006
- $longtime   10:50:40 AM
- $shortdate  20060524
- $shorttime  105040
- $year       2006
- $month      5
- $day        24
- $dayofweek  Wednesday
- $dayofyear  144
- $hour       10
- $minute     50
- $second     40
- $eol        End of line character
- $tab        Tab character
- $output     Result of previous command

9.5 Computation

9.5.1 Syntax

Computation on a variable follows the syntax "compute $d filename", with $d being a data object that defines the output and filename the name of the file containing the computations.

data $dh=data type Drillhole "ODam October 06 5m Comps"
compute $dh compute01.cs

9.5.2 Computation file

The computation file follows a syntax close to the C programming language. It can contain 3 blocks starting with keywords:

- BEGIN   computations performed before the main loop
- MAIN    computations performed for each data point
- END     computations performed after the main loop

9.5.2.1 BEGIN

This part usually includes variable declarations and initializations. Variables are declared in C style, and do not have '$' at the beginning like ScriptVariables. Any ScriptVariable created in the calling script is available in the computation script.

BEGIN
double Cu_Rec_rc;
double Au_Rec;
double Ag_Rec;

9.5.2.2 MAIN

This part is performed for each data point in the output data object. If the object is a 3D grid, for example, the computation will be performed on each grid node. A ScriptVariable can be assigned to in the computation script. In the following example, the variables $Cu_Grade, $Au_Grade and $Ag_Grade are used in the computation. $NetRevenue is computed in the script.

MAIN
if ($Cu_Grade>0)
    Cu_Rec_rc=97.081-0.9109/$Cu_Grade;
else
    Cu_Rec_rc=0;
Cu_Rec=Cu_Rec_rc*0.9408;
Au_Rec=$Au_Grade*70.56;
if ($Ag_Grade>0) {
    Ag_Rec=33.8706816*pow(1/$Ag_Grade, -0.4051);
    if (Ag_Rec>90) Ag_Rec=90;
} else
    Ag_Rec=0;
…
$NetRevenue=Revenue-CostIfOre;

9.5.2.3 END

Computations can be specified in the END section to be performed after the MAIN section.

9.5.3 A few elements of C syntax

• if-else

if (a>0) {
    b=a*2;
} else {
    b=2;
}

• if-else compact version (one statement)

if (a>0) b=a*2; else b=2;

• Comparison

if (a==0) b=0;
if (a!=0) b=1;

9.5.4 Recognized functions

The following mathematical functions are recognized (case-sensitive):

• abs($var)
• cos($var)
• sin($var)
• log10($var)
• log($var)
• exp($var)
• pow($var,$pow)
• sqrt($var)
• tan($var)
• isMissing($var)

9.6 Examples

9.6.1 Building filename for logging

string $filename="c:\test_" $shortdate ".dat"
echo $filename
option LogFile $filename overwrite

Output: c:\test_20060524.dat

9.6.2 Reporting time

string $sd=$longdate; string $st=$longtime
echo "Date is: " $sd $eol "Time is: " $st

Output:
Date is: Wednesday, May 24, 2006
Time is: 10:31:32 AM

9.6.3 Computation

Calling script:

data $sgs=data type Model3DData "SGS"
variable $Cu_Grade=variable $sgs name "cu"
variable $Ag_Grade=variable $sgs name "ag"
variable $Au_Grade=variable $sgs name "au"
variable $Revenue
process $Revenue createvariable continuous "Revenue"
compute $sgs compute.cs

Computation script (compute.cs):

BEGIN
double Cu_Rec_rc;
double Au_Rec;
double Ag_Rec;
double U_Rec;
double Cu_Rec;
double s_fgs;
double tonnes;
double tonnes_srf;
double tonnes_sf;
double Revenue;
double CostIfOre;
double CostIfWaste;
MAIN
if ($Cu_Grade>0)
    Cu_Rec_rc=97.081-0.9109/$Cu_Grade;
else
    Cu_Rec_rc=0;
Cu_Rec=Cu_Rec_rc*0.9408;
Au_Rec=$Au_Grade*70.56;
if ($Ag_Grade>0) {
    Ag_Rec=33.8706816*pow(1/$Ag_Grade, -0.4051);
    if (Ag_Rec>90) Ag_Rec=90;
} else
    Ag_Rec=0;
U_Rec=0.01822464*$U_Grade+60.9559968;
if (U_Rec>81.6) U_Rec=81.6;
if ($BaS_Grade>0) {
    $CuS_Ratio=$Cu_Grade/$BaS_Grade;
    if ($CuS_Ratio>4) $CuS_Ratio=4;
} else
    $CuS_Ratio=0;
s_fgs=100*$Cu_Grade/(1.368+41.833*$CuS_Ratio-8.964*$CuS_Ratio*$CuS_Ratio+0.861*$CuS_Ratio*$CuS_Ratio*$CuS_Ratio);
tonnes=135*$SG_Grade;
tonnes_srf=s_fgs*Cu_Rec_rc/10000*tonnes;
tonnes_sf=tonnes_srf*1.2;
Revenue=tonnes*($Cu_Grade*Cu_Rec/100*$Cu_Price+$U_Grade*U_Rec/100*$U_Price+$Au_Grade*Au_Rec/100*$Au_Price+$Ag_Grade*Ag_Rec/100*$Ag_Price);
CostIfOre=tonnes*$Processing_Cost+tonnes_sf*$Smelting_Cost+tonnes*$Cu_Grade/100*Cu_Rec/100*$Cu_Cost+tonnes*$U_Grade/1000000*U_Rec/100*$U_Cost;
CostIfWaste=-1.05*tonnes;
$NetRevenue=Revenue-CostIfOre;
if ($Cu_Grade<0.2) $NetRevenue=CostIfWaste;
if ($FlagUG==1)
    $NetRevenue=0;
else if ($FlagUG==2)
    $NetRevenue=-311.85;
$Cu=$Cu_Grade;
$SG=$SG_Grade;
$Ag=$Ag_Grade;
$Au=$Au_Grade;
$BaS=$BaS_Grade;
$U=$U_Grade;
END

10 Visualization

10.1 Window

10.2 Keyboard shortcuts

Key/Mouse                        Action
c                                Toggle colormap
b                                Toggle background color
+                                Start zoom
0                                Zoom off
S                                Vertical exaggeration up
Mouse wheel up                   Vertical exaggeration up
s                                Vertical exaggeration down
Mouse wheel down                 Vertical exaggeration down
x                                yz plane
y                                xz plane
z                                xy plane
Up                               Go up one slice
PageUp                           Go up 10 slices
Down                             Go down one slice
PageDown                         Go down 10 slices
Left mouse button                Zoom in
Shift+Left mouse button          Zoom out
Middle mouse button              Pan
Shift+Right mouse button         Zoom off
Double click Left mouse button   Zoom off
Ctrl right mouse button          Edit (if available)