
Data Preprocessing

CS 536 – Data Mining

These slides are adapted from J. Han and M. Kamber’s book slides (http://www.cs.sfu.ca/~han)


Representation of Data

Data can be represented in different ways
Different types of values are used for attributes or features
Understanding the semantics of each type is important in data analysis and mining
Types of values
  Numeric or symbolic (categorical)
  Continuous or discrete
  Static or dynamic


Numeric and Symbolic Values

Numeric values
  Real or integer
  Ordering relationships hold (less than, greater than, equal to)
  Distance relationship holds (differences between values are meaningful)
Symbolic values
  Only the equality relationship holds
  Can be converted to numeric codes; however, these codes do not have the properties of numeric values


Continuous and Discrete Variables

Continuous variables
  Also known as quantitative or metric variables
  Theoretically measured with infinite precision
  Interval or ratio scale
  Represented by numbers (real or integer), not symbols
Discrete variables
  Also known as qualitative variables
  Represented by symbols
  Nominal or ordinal scale
  Periodic variable: a special type of discrete variable


Static and Dynamic Variables

Static variables
  No consideration of time
Dynamic or temporal variables
  Time dependent

Most real-world data are dynamic. However, dynamic data often need additional preprocessing before data mining techniques can be applied effectively.


The “Curse of Dimensionality”

Data mining deals with large numbers of data samples or records; furthermore, samples may have high dimensionality (a large number of attributes or features)
The curse of dimensionality
  In a high-dimensional space, exponentially more samples are needed to achieve the same data density as in a lower-dimensional space
Data analysis and mining techniques are based on statistics, which are data-density dependent


Properties of High-Dimensional Spaces (1)

The size of a data set yielding the same density of data points in a k-dimensional space increases exponentially with k (n^k points are needed in k dimensions if n suffice in one dimension)
Because of this, the density of data is often low and unsatisfactory for data analysis and mining purposes
A larger radius is needed to enclose a given fraction of the data points in a high-dimensional space
A larger neighborhood is needed to capture even a small fraction of the samples in a high-dimensional space
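To make the last point concrete, here is a small Python sketch (not from the original slides) showing how large a neighborhood must become as dimensionality grows, assuming points spread uniformly in the unit hypercube.

```python
# Illustrative sketch (not from the slides): for points uniform in [0, 1]^d,
# a sub-cube expected to hold a fraction p of the data must have edge length
# p**(1/d), which approaches 1 as d grows -- the "neighborhood" needed to
# capture even 10% of the samples spans almost the whole range of every attribute.
def edge_length(p, d):
    """Edge of a sub-hypercube expected to contain fraction p of uniform data in d dimensions."""
    return p ** (1.0 / d)

if __name__ == "__main__":
    for d in (1, 2, 10, 100):
        print(f"d={d:4d}  edge needed for 10% of the data: {edge_length(0.10, d):.3f}")
    # d=1 -> 0.100, d=2 -> 0.316, d=10 -> 0.794, d=100 -> 0.977
```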


Properties of High-Dimensional Spaces (2)

Almost every point is closer to an edge than to another sample point in a high-dimensional space

Almost every point is an outlier


Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Summary


Why Data Preprocessing?

Data in the real world is dirty
  incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
  noisy: containing errors or outliers
  inconsistent: containing discrepancies in codes or names
No quality data, no quality mining results!
  Quality decisions must be based on quality data
No quality data, inefficient mining process!
  Complete, noise-free, and consistent data means faster algorithms


Multi-Dimensional Measure of Data Quality

A well-accepted multidimensional view:
  Accuracy
  Completeness
  Consistency
  Timeliness
  Believability
  Value added
  Interpretability
  Accessibility
Broad categories: intrinsic, contextual, representational, and accessibility


Major Tasks in Data Preprocessing

Data cleaning
  Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
Data integration
  Integration of multiple databases, data cubes, or files
Data transformation
  Normalization and aggregation
Data reduction
  Obtains a reduced representation that is smaller in volume but produces the same or similar analytical results
Data discretization
  Part of data reduction, but of particular importance for numerical data


Forms of data preprocessing


Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Summary


Data Cleaning

Data cleaning tasks

Fill in missing values

Identify outliers and smooth out noisy data

Correct inconsistent data


Missing Data

Data is not always available
  e.g., many tuples have no recorded value for several attributes, such as customer income in sales data
Missing data may be due to
  equipment malfunction
  inconsistency with other recorded data, leading to deletion
  data not entered due to misunderstanding
  certain data not being considered important at the time of entry
  failure to register history or changes of the data
Missing data may need to be inferred.


How to Handle Missing Data?

Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably
Fill in the missing value manually: tedious and often infeasible
Use a global constant to fill in the missing value: e.g., "unknown"; this effectively creates a new class
Use the attribute mean to fill in the missing value
Use the attribute mean for all samples belonging to the same class to fill in the missing value: smarter
Use the most probable value to fill in the missing value: inference-based, e.g., a Bayesian formula or a decision tree
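The two mean-based strategies can be sketched in a few lines of Python (not from the original slides); the attribute and class names below are illustrative only.

```python
# Minimal sketch (illustrative names) of global-mean imputation and
# class-conditional mean imputation for a numeric attribute with missing values (None).
from collections import defaultdict

records = [
    {"class": "A", "income": 30.0},
    {"class": "A", "income": None},
    {"class": "B", "income": 50.0},
    {"class": "B", "income": 70.0},
    {"class": "B", "income": None},
]

def impute_global_mean(rows, attr):
    vals = [r[attr] for r in rows if r[attr] is not None]
    mean = sum(vals) / len(vals)
    for r in rows:
        if r[attr] is None:
            r[attr] = mean

def impute_class_mean(rows, attr, label):
    sums, counts = defaultdict(float), defaultdict(int)
    for r in rows:
        if r[attr] is not None:
            sums[r[label]] += r[attr]
            counts[r[label]] += 1
    for r in rows:
        if r[attr] is None:
            r[attr] = sums[r[label]] / counts[r[label]]  # mean of the record's own class

impute_class_mean(records, "income", "class")
print(records)  # the class-A gap becomes 30.0, the class-B gap becomes 60.0
```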


Noisy Data

Noise: random error or variance in a measured variable

Incorrect attribute values may be due to
  faulty data collection instruments
  data entry problems
  data transmission problems
  technology limitations
  inconsistency in naming conventions
Other data problems which require data cleaning
  duplicate records
  incomplete data
  inconsistent data


How to Handle Noisy Data?

Binning
  first sort the data and partition it into (equi-depth) bins
  then smooth by bin means, bin medians, bin boundaries, etc.
Clustering
  detect and remove outliers
Combined computer and human inspection
  detect suspicious values and have a human check them
Regression
  smooth by fitting the data to regression functions


Simple Discretization Methods: Binning

Equal-width (distance) partitioning:
  Divides the range into N intervals of equal size: a uniform grid
  If A and B are the lowest and highest values of the attribute, the width of the intervals is W = (B - A)/N
  The most straightforward approach
  But outliers may dominate the presentation
  Skewed data is not handled well
Equal-depth (frequency) partitioning:
  Divides the range into N intervals, each containing approximately the same number of samples
  Good data scaling
  Managing categorical attributes can be tricky
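A short Python sketch (not from the original slides) contrasting the two partitioning schemes on a small numeric attribute:

```python
# Sketch of equal-width vs. equal-depth partitioning, following the definitions above.
def equal_width_bins(values, n):
    """Return n equal-width interval edges covering the values."""
    a, b = min(values), max(values)
    w = (b - a) / n
    return [(a + i * w, a + (i + 1) * w) for i in range(n)]

def equal_depth_bins(values, n):
    """Partition the sorted values into n bins with (approximately) equal counts."""
    s = sorted(values)
    size, rem = divmod(len(s), n)
    bins, start = [], 0
    for i in range(n):
        end = start + size + (1 if i < rem else 0)
        bins.append(s[start:end])
        start = end
    return bins

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
print(equal_width_bins(prices, 3))   # [(4.0, 14.0), (14.0, 24.0), (24.0, 34.0)]
print(equal_depth_bins(prices, 3))   # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
```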


Binning Methods for Data Smoothing

* Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34

* Partition into (equi-depth) bins:
  - Bin 1: 4, 8, 9, 15
  - Bin 2: 21, 21, 24, 25
  - Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
  - Bin 1: 9, 9, 9, 9
  - Bin 2: 23, 23, 23, 23
  - Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
  - Bin 1: 4, 4, 4, 15
  - Bin 2: 21, 21, 25, 25
  - Bin 3: 26, 26, 26, 34
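The smoothing step of this worked example can be reproduced with a small Python sketch (not from the original slides):

```python
# Sketch reproducing the smoothing of the equi-depth bins above.
def smooth_by_means(bins):
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    out = []
    for b in bins:
        lo, hi = b[0], b[-1]              # bins are sorted, so the ends are the boundaries
        out.append([lo if v - lo <= hi - v else hi for v in b])
    return out

bins = [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(smooth_by_means(bins))       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins))  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```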


Cluster Analysis


Regression

[Figure: scatter of (x, y) data points with fitted regression line y = x + 1; the observed value Y1 at X1 is smoothed to the fitted value Y1'.]


Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Summary


Data Integration

Data integration: combines data from multiple sources into a coherent store
Schema integration
  integrate metadata from different sources
  Entity identification problem: identify real-world entities from multiple data sources, e.g., A.cust-id and B.cust-# refer to the same attribute
Detecting and resolving data value conflicts
  for the same real-world entity, attribute values from different sources may differ
  possible reasons: different representations, different scales, e.g., metric vs. British units


Handling Redundant Data in Data Integration

Redundant data occur often when integrating multiple databases
  The same attribute may have different names in different databases
  One attribute may be a "derived" attribute in another table, e.g., annual revenue
Redundant attributes may be detected by correlation analysis
Careful integration of data from multiple sources may help reduce/avoid redundancies and inconsistencies and improve mining speed and quality
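A minimal Python sketch (not from the original slides) of correlation analysis as a redundancy check; the attribute names are illustrative only.

```python
# Pearson correlation between two numeric attributes; a coefficient near +/-1
# suggests one attribute may be derivable from the other (i.e., redundant).
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

monthly_revenue = [10, 12, 9, 14, 11]
annual_revenue  = [120, 144, 108, 168, 132]   # derived: 12 * monthly_revenue
print(round(pearson(monthly_revenue, annual_revenue), 3))  # 1.0 -> redundant
```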


Data Transformation

Smoothing: remove noise from data
Aggregation: summarization, data cube construction
Generalization: concept hierarchy climbing
Normalization: scaled to fall within a small, specified range
  min-max normalization
  z-score normalization
  normalization by decimal scaling
Attribute/feature construction
  New attributes constructed from the given ones


Data Transformation: Normalization

min-max normalization:

  v' = ((v - min_A) / (max_A - min_A)) * (new_max_A - new_min_A) + new_min_A

z-score normalization:

  v' = (v - mean_A) / stand_dev_A

normalization by decimal scaling:

  v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
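The three normalizations above can be sketched in Python (not from the original slides); new_min and new_max default to the common [0, 1] target range.

```python
# Sketch of min-max, z-score, and decimal-scaling normalization.
import math

def min_max(values, new_min=0.0, new_max=1.0):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min for v in values]

def z_score(values):
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / std for v in values]

def decimal_scaling(values):
    j = 0
    while max(abs(v) for v in values) / (10 ** j) >= 1:   # enforce max(|v'|) < 1
        j += 1
    return [v / (10 ** j) for v in values]

data = [200, 300, 400, 600, 1000]
print(min_max(data))          # [0.0, 0.125, 0.25, 0.5, 1.0]
print(decimal_scaling(data))  # [0.02, 0.03, 0.04, 0.06, 0.1]
print([round(v, 2) for v in z_score(data)])
```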


Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Summary


Data Reduction Strategies

A warehouse may store terabytes of data: complex data analysis/mining may take a very long time to run on the complete data set
Data reduction
  Obtains a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results
Data reduction strategies
  Data cube aggregation
  Dimensionality reduction
  Data compression
  Numerosity reduction
  Discretization and concept hierarchy generation


Data Cube Aggregation

The lowest level of a data cube
  the aggregated data for an individual entity of interest
  e.g., a customer in a phone-calling data warehouse
Multiple levels of aggregation in data cubes
  further reduce the size of the data to deal with
Reference appropriate levels
  use the smallest representation that is sufficient to solve the task
Queries regarding aggregated information should be answered using the data cube, when possible


Dimensionality Reduction

Feature selection (i.e., attribute subset selection):
  Select a minimum set of features such that the probability distribution of the different classes, given the values of those features, is as close as possible to the original distribution given the values of all features
  Reduces the number of attributes in the discovered patterns, making them easier to understand
Heuristic methods (due to the exponential number of choices):
  step-wise forward selection
  step-wise backward elimination
  combining forward selection and backward elimination
  decision-tree induction
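A small Python sketch (not from the original slides) of the step-wise forward selection heuristic; the scoring function is a stand-in for whatever evaluation the analyst uses (e.g., cross-validated accuracy).

```python
# Greedy forward selection: add the attribute that most improves a caller-supplied
# score until no attribute helps.
def forward_selection(attributes, score):
    """attributes: list of attribute names; score(subset) -> float (higher is better)."""
    selected, best = [], score([])
    remaining = list(attributes)
    while remaining:
        top_score, top_attr = max((score(selected + [a]), a) for a in remaining)
        if top_score <= best:          # no attribute improves the score: stop
            break
        selected.append(top_attr)
        remaining.remove(top_attr)
        best = top_score
    return selected

# Toy usage with a made-up scoring function that "likes" A1, A4 and A6.
useful = {"A1": 0.2, "A4": 0.5, "A6": 0.1}
print(forward_selection(["A1", "A2", "A3", "A4", "A5", "A6"],
                        lambda s: sum(useful.get(a, 0.0) for a in s)))
# ['A4', 'A1', 'A6']
```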


Example of Decision Tree Induction

Initial attribute set: {A1, A2, A3, A4, A5, A6}

[Decision tree: A4? at the root; its branches test A1? and A6?; the leaves are Class 1 and Class 2]

=> Reduced attribute set: {A1, A4, A6}


Data Compression

String compression
  There are extensive theories and well-tuned algorithms
  Typically lossless
  But only limited manipulation is possible without expansion
Audio/video compression
  Typically lossy compression, with progressive refinement
  Sometimes small fragments of the signal can be reconstructed without reconstructing the whole
Time sequences (which are not audio)
  Typically short, and vary slowly with time


Data Compression

[Figure: lossless compression maps the original data to compressed data and back exactly; lossy compression recovers only an approximation of the original data.]


Wavelet Transforms

Discrete wavelet transform (DWT): linear signal processing
Compressed approximation: store only a small fraction of the strongest wavelet coefficients
Similar to the discrete Fourier transform (DFT), but gives better lossy compression and is localized in space
Method:
  The length, L, must be an integer power of 2 (pad with 0s when necessary)
  Each transform step has two functions: smoothing and differencing
  Each step applies to pairs of data, producing two sets of data of length L/2
  The two functions are applied recursively until the desired length is reached
Example wavelet families: Haar-2, Daubechies-4
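A minimal Python sketch (not from the original slides) of the Haar case of this method, using pairwise averages as the smoothing function and pairwise half-differences as the detail function.

```python
# One-dimensional Haar wavelet decomposition: each pass stores pairwise averages
# (smoothing) and pairwise differences (detail), halving the length until one value is left.
def haar_transform(signal):
    """signal length must be a power of 2; returns [overall average, details...]."""
    data, details = list(signal), []
    while len(data) > 1:
        avgs = [(data[i] + data[i + 1]) / 2 for i in range(0, len(data), 2)]
        diffs = [(data[i] - data[i + 1]) / 2 for i in range(0, len(data), 2)]
        details = diffs + details          # coarser details go in front
        data = avgs
    return data + details

print(haar_transform([9, 7, 3, 5]))   # [6.0, 2.0, 1.0, -1.0]
```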


Principal Component Analysis

Given N data vectors in k dimensions, find c <= k orthogonal vectors that can best be used to represent the data
The original data set is reduced to one consisting of N data vectors on c principal components (reduced dimensions)
Each data vector is a linear combination of the c principal component vectors
Works for numeric data only
Used when the number of dimensions is large
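A short Python sketch (not from the original slides) of PCA via the eigenvectors of the covariance matrix; it assumes numpy is available.

```python
# PCA sketch: center the data, take the top-c eigenvectors of the covariance
# matrix, and project the data onto them.
import numpy as np

def pca(X, c):
    """X: (N, k) numeric array. Returns (scores, components)."""
    Xc = X - X.mean(axis=0)                      # center each attribute
    cov = np.cov(Xc, rowvar=False)               # k x k covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:c]        # indices of the top-c directions
    components = eigvecs[:, order]               # (k, c)
    return Xc @ components, components.T         # projected data, principal axes

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
scores, axes = pca(X, 2)
print(scores.shape, axes.shape)                  # (100, 2) (2, 5)
```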


[Figure: Principal Component Analysis: original axes X1 and X2 with principal component directions Y1 and Y2 aligned with the directions of greatest variance.]


Numerosity Reduction

Parametric methods
  Assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers)
  Log-linear models: obtain the value at a point in m-dimensional space as a product over appropriate marginal subspaces
Non-parametric methods
  Do not assume models
  Major families: histograms, clustering, sampling


Regression and Log-Linear Models

Linear regression: data are modeled to fit a straight line
  Often uses the least-squares method to fit the line
Multiple regression: allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
Log-linear model: approximates discrete multidimensional probability distributions


Regression Analysis and Log-Linear Models

Linear regression: Y = α + β X
  The two parameters α and β specify the line and are estimated from the data at hand, using the least-squares criterion on the known values Y1, Y2, ..., and X1, X2, ....
Multiple regression: Y = b0 + b1 X1 + b2 X2
  Many nonlinear functions can be transformed into the above form.
Log-linear models:
  The multi-way table of joint probabilities is approximated by a product of lower-order tables.
  Probability: p(a, b, c, d) = α_ab β_ac χ_ad δ_bcd
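A small Python sketch (not from the original slides) of estimating the linear-regression parameters α (intercept) and β (slope) with the least-squares criterion.

```python
# Closed-form least-squares estimates for simple linear regression Y = alpha + beta * X.
def least_squares(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    alpha = my - beta * mx
    return alpha, beta

xs = [1, 2, 3, 4]
ys = [2.1, 2.9, 4.2, 4.8]
alpha, beta = least_squares(xs, ys)
print(round(alpha, 2), round(beta, 2))   # 1.15 0.94
```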


Histograms

A popular data reduction technique

Divide data into buckets and store the average (or sum) for each bucket

Can be constructed optimally in one dimension using dynamic programming

Related to quantization problems.

[Figure: example histogram with bucket counts (0-40) for values ranging from 10,000 to 100,000.]


Clustering

Partition the data set into clusters; then only the cluster representations need to be stored
Can be very effective if the data is clustered, but not if the data is "smeared"
Can use hierarchical clustering, stored in multi-dimensional index tree structures
There are many choices of clustering definitions and clustering algorithms; these are detailed later in the course


Sampling

Allows a mining algorithm to run with complexity that is potentially sub-linear in the size of the data
Choose a representative subset of the data
  Simple random sampling may perform very poorly in the presence of skew
Develop adaptive sampling methods
  Stratified sampling:
    Approximate the percentage of each class (or subpopulation of interest) in the overall database
    Used in conjunction with skewed data
Sampling may not reduce database I/Os (data is read a page at a time).
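A minimal Python sketch (not from the original slides) of stratified sampling; the attribute and class names are illustrative only.

```python
# Stratified sampling: draw from each class in proportion to its share of the
# data, so skewed classes keep their percentage in the sample.
import random

def stratified_sample(rows, label, fraction, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for r in rows:
        by_class.setdefault(r[label], []).append(r)
    sample = []
    for members in by_class.values():
        k = max(1, round(fraction * len(members)))   # keep at least one per class
        sample.extend(rng.sample(members, k))
    return sample

rows = [{"cls": "young"}] * 80 + [{"cls": "senior"}] * 20
picked = stratified_sample(rows, "cls", 0.1)
print(len(picked))   # 10 records: 8 young, 2 senior
```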


Sampling

[Figure: a simple random sample drawn from the raw data without replacement (SRSWOR) and with replacement (SRSWR).]


Sampling

[Figure: raw data versus a cluster/stratified sample drawn from it.]


Hierarchical Reduction

Use a multi-resolution structure with different degrees of reduction
Hierarchical clustering is often performed, but it tends to define partitions of data sets rather than "clusters"
Parametric methods are usually not amenable to hierarchical representation
Hierarchical aggregation
  An index tree hierarchically divides a data set into partitions by the value range of some attributes
  Each partition can be considered a bucket
  Thus an index tree with aggregates stored at each node is a hierarchical histogram


Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Summary


Discretization

Three types of attributes:
  Nominal: values from an unordered set
  Ordinal: values from an ordered set
  Continuous: real numbers
Discretization: divide the range of a continuous attribute into intervals
  Some classification algorithms only accept categorical attributes
  Reduces data size
  Prepares the data for further analysis


Discretization and Concept Hierarchy

Discretization
  Reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals. Interval labels can then be used to replace actual data values.
Concept hierarchies
  Reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) with higher-level concepts (such as young, middle-aged, or senior).


Discretization and concept hierarchy generation for numeric data

Binning (see slides before)

Histogram analysis (see slides before)

Clustering analysis (see slides before)

Entropy-based discretization

Segmentation by natural partitioning


Entropy-Based Discretization

Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is

  E(S, T) = (|S1| / |S|) Ent(S1) + (|S2| / |S|) Ent(S2)

The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization.
The process is applied recursively to the partitions obtained until some stopping criterion is met, e.g.,

  Ent(S) - E(T, S) > δ

Experiments show that it may reduce data size and improve classification accuracy.
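A short Python sketch (not from the original slides) of choosing the binary split boundary T that minimizes the class-information entropy E(S, T) defined above.

```python
# Find the candidate boundary (midpoint between consecutive sorted values)
# that minimizes the weighted class entropy of the resulting two intervals.
import math

def ent(labels):
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return 0.0 - sum(p * math.log2(p) for p in probs)

def best_boundary(values, labels):
    pairs = sorted(zip(values, labels))
    best_t, best_e = None, float("inf")
    for i in range(1, len(pairs)):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2          # midpoint candidate boundary
        left = [c for v, c in pairs if v <= t]
        right = [c for v, c in pairs if v > t]
        e = (len(left) * ent(left) + len(right) * ent(right)) / len(pairs)
        if e < best_e:
            best_t, best_e = t, e
    return best_t, best_e

values = [1, 2, 3, 10, 11, 12]
labels = ["no", "no", "no", "yes", "yes", "yes"]
print(best_boundary(values, labels))   # (6.5, 0.0): a perfect class split
```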


Segmentation by natural partitioning

The 3-4-5 rule can be used to segment numeric data into relatively uniform, "natural" intervals (a sketch of the core step follows the rules below).

* If an interval covers 3, 6, 7 or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals

* If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals

* If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals
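A simplified Python sketch (not from the original slides) of the core 3-4-5 step; it assumes the low/high values have already been rounded to the most significant digit and omits the later boundary adjustment to cover Min and Max.

```python
# Core 3-4-5 step: count the distinct values covered at the most significant
# digit and split the rounded range into 3, 4, or 5 equi-width intervals.
def three_four_five(low, high):
    msd = 10 ** (len(str(int(high - low))) - 1)           # most significant digit position
    distinct = round((high - low) / msd)                   # distinct msd values covered
    if distinct in (3, 6, 7, 9):
        parts = 3
    elif distinct in (2, 4, 8):
        parts = 4
    else:                                                   # 1, 5, or 10
        parts = 5
    width = (high - low) / parts
    return [(low + i * width, low + (i + 1) * width) for i in range(parts)]

print(three_four_five(-1000, 2000))   # 3 intervals: (-1000, 0), (0, 1000), (1000, 2000)
```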


Example of 3-4-5 rule

Step 1: For the attribute profit: Min = -$351, Low (5th percentile) = -$159, High (95th percentile) = $1,838, Max = $4,700
Step 2: msd = 1,000, so Low is rounded down to -$1,000 and High is rounded up to $2,000
Step 3: The range (-$1,000 - $2,000) covers 3 distinct values at the msd, so it is partitioned into 3 equi-width intervals: (-$1,000 - 0), (0 - $1,000), ($1,000 - $2,000)
Step 4: The boundary intervals are adjusted to cover Min and Max, giving (-$400 - 0), (0 - $1,000), ($1,000 - $2,000), ($2,000 - $5,000); each interval is then further partitioned by the same rule:
  (-$400 - 0): (-$400 - -$300), (-$300 - -$200), (-$200 - -$100), (-$100 - 0)
  (0 - $1,000): (0 - $200), ($200 - $400), ($400 - $600), ($600 - $800), ($800 - $1,000)
  ($1,000 - $2,000): ($1,000 - $1,200), ($1,200 - $1,400), ($1,400 - $1,600), ($1,600 - $1,800), ($1,800 - $2,000)
  ($2,000 - $5,000): ($2,000 - $3,000), ($3,000 - $4,000), ($4,000 - $5,000)


Concept hierarchy generation for categorical data

Specification of a partial ordering of attributes explicitly at the schema level by users or experts
Specification of a portion of a hierarchy by explicit data grouping
Specification of a set of attributes, but not of their partial ordering
Specification of only a partial set of attributes


Specification of a set of attributes

Concept hierarchy can be automatically generated based on the number of distinct values per attribute in the given attribute set. The attribute with the most distinct values is placed at the lowest level of the hierarchy.

country (15 distinct values)
province_or_state (65 distinct values)
city (3,567 distinct values)
street (674,339 distinct values)

Generated hierarchy: country < province_or_state < city < street, with street at the lowest level
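A minimal Python sketch (not from the original slides) of this heuristic: order a given attribute set by the number of distinct values, fewest at the top. The sample table is illustrative only.

```python
# Order attributes into a concept hierarchy by distinct-value counts (fewest first).
def hierarchy_by_distinct_counts(table, attrs):
    counts = {a: len({row[a] for row in table}) for a in attrs}
    return sorted(attrs, key=lambda a: counts[a])          # top level first

table = [
    {"country": "CA", "province_or_state": "BC", "city": "Vancouver", "street": "Main St"},
    {"country": "CA", "province_or_state": "BC", "city": "Victoria",  "street": "Oak St"},
    {"country": "CA", "province_or_state": "ON", "city": "Toronto",   "street": "King St"},
    {"country": "US", "province_or_state": "WA", "city": "Seattle",   "street": "Pine St"},
    {"country": "US", "province_or_state": "WA", "city": "Seattle",   "street": "1st Ave"},
]
print(hierarchy_by_distinct_counts(table, ["street", "city", "country", "province_or_state"]))
# ['country', 'province_or_state', 'city', 'street']
```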


Summary

Data preparation is a big issue for both warehousing and mining
Data preparation includes
  Data cleaning and data integration
  Data reduction and feature selection
  Discretization
Many methods have been developed, but data preparation is still an active area of research

