Business Intelligence & Data Mining-6-7

Page 1

Data Mining

Page 2

Data Mining

Data Mining is the process of discovering meaningful new correlations, patterns and trends by sifting through large amounts of data stored in repositories, by using pattern recognition technologies as well as statistical and mathematical techniques.

Page 3

Data Mining Functionalities

• Discrimination
  – Generalize, summarize, and contrast data characteristics, e.g., dry vs. wet regions (in image processing)
• Association (correlation and causality)
  – age(X, "20..29") ∧ income(X, "20K..29K") ⇒ buys(X, "PC")
  – buys(X, "computer") ⇒ buys(X, "software")
  – (a support/confidence sketch follows this list)
• Classification and Prediction
  – Finding models that describe and distinguish classes or concepts for future prediction
  – Prediction of some unknown or missing values
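A minimal sketch (not part of the original slides) of how the support and confidence of an association rule such as buys(X, "computer") ⇒ buys(X, "software") can be computed; the transaction data is hypothetical and Python is assumed:

    # Hypothetical transactions; each set lists the items one customer bought.
    transactions = [
        {"computer", "software"},
        {"computer", "software", "printer"},
        {"computer"},
        {"printer"},
        {"computer", "software"},
    ]

    n = len(transactions)
    both = sum(1 for t in transactions if {"computer", "software"} <= t)
    antecedent = sum(1 for t in transactions if "computer" in t)

    support = both / n               # fraction of transactions with both items
    confidence = both / antecedent   # P(software | computer)
    print(f"support={support:.2f}, confidence={confidence:.2f}")  # 0.60, 0.75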

Page 4

Data Mining Functionalities

• Cluster detection
  – Group data to form new classes, e.g., cluster customers based on similarity of some sort
  – Based on the principle: maximize the intra-class similarity and minimize the inter-class similarity
• Outlier analysis
  – Outlier: a data object that does not comply with the general behavior of the data
  – It can be considered noise or an exception, but is quite useful in fraud detection and rare-events analysis

Page 5

Supervised vs. Unsupervised Learning

• Supervised learning (e.g., classification)
  – Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations or the actual outcome
  – New data is classified based on the model obtained from the training set
• Unsupervised learning (e.g., clustering)
  – The class labels of training data are unknown
  – Given a set of measurements, observations, etc., the aim is to establish the existence of classes, clusters, links or associations in the data
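A short sketch of the distinction, assuming Python with scikit-learn (not part of the original slides); the labels y are consumed only in the supervised case:

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_blobs(n_samples=150, centers=3, random_state=0)

    # Supervised: the training data comes with class labels y.
    clf = DecisionTreeClassifier().fit(X, y)
    print(clf.predict(X[:5]))

    # Unsupervised: no labels; the algorithm must discover the groups itself.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(km.labels_[:5])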

Page 6

Classification

Page 7

Classification vs. Prediction

• Classification:
  – predicts categorical class labels
  – classifies data (constructs a model) based on the training set and the values (class labels) of a target / class attribute, and uses it to classify new data
• Prediction:
  – models continuous-valued functions, i.e., predicts unknown or missing values
• Typical Applications of Classification
  – credit approval
  – target marketing
  – medical diagnosis

Page 8

Classification—A Two-Step Process

• Model construction: describing a set of predetermined classes
  – Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
  – The set of tuples used for model construction is known as the training set
  – The model is represented as classification rules, decision trees, a trained neural network or mathematical formulae
• Model usage: classifying future or unknown objects
  – First, estimate the accuracy of the model
    • The known label of each test sample is compared with the classified result from the model
    • Accuracy rate is the percentage of test set samples that are correctly classified by the model
    • The test set must be independent of the training set, otherwise over-fitting can occur
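The two steps in code, as a hedged sketch with scikit-learn and one of its bundled datasets (the slides do not prescribe a particular library or dataset):

    from sklearn.datasets import load_iris
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # Step 1 -- model construction on the training set only.
    model = DecisionTreeClassifier().fit(X_train, y_train)

    # Step 2 -- estimate accuracy on the independent test set before use.
    # Accuracy rate = fraction of test samples classified correctly.
    print(accuracy_score(y_test, model.predict(X_test)))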

Page 9

Classification Process: Model Construction

Training Data:

NAME  RANK            YEARS  Ind. Chrg.
Mike  Assistant Prof  3      no
Mary  Assistant Prof  7      yes
Bill  Professor       2      yes
Jim   Associate Prof  7      yes
Dave  Assistant Prof  6      no
Anne  Associate Prof  3      no

A classification algorithm produces the Classifier (Model):

IF rank = 'professor' OR years > 6
THEN Ind. Chrg. = 'yes'

Page 10

Classification Process: Use the Model (Testing & Prediction)

Testing Data (fed to the Classifier):

NAME     RANK            YEARS  Ind. Chrg.
Tom      Assistant Prof  2      no
Merlisa  Associate Prof  7      no
George   Professor       5      yes
Joseph   Assistant Prof  7      yes

Unseen Data: (Jeff, Professor, 4) → Ind. Chrg.?
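The learned rule from the previous slide, written out as a hypothetical Python function and applied to the unseen tuple:

    def ind_chrg(rank: str, years: int) -> str:
        """IF rank = 'professor' OR years > 6 THEN Ind. Chrg. = 'yes'."""
        return "yes" if rank == "Professor" or years > 6 else "no"

    print(ind_chrg("Professor", 4))  # Jeff -> 'yes'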

Page 11

Data Preparation for Classification and Prediction

• Data cleaning
  – Preprocess data in order to reduce noise and handle missing values
• Relevance analysis (feature selection)
  – Remove irrelevant or redundant attributes
• Data transformation
  – Generalize and/or normalize data
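A minimal sketch of the three preparation steps, assuming Python with pandas and a hypothetical data frame (column names are illustrative only):

    import pandas as pd

    df = pd.DataFrame({"age": [25, None, 47],
                       "income": [30_000, 52_000, 41_000],
                       "cust_id": [1, 2, 3]})

    # Data cleaning: handle missing values (here, fill with the column mean).
    df["age"] = df["age"].fillna(df["age"].mean())

    # Relevance analysis: drop an attribute judged irrelevant to the class.
    df = df.drop(columns=["cust_id"])

    # Data transformation: min-max normalize the remaining attributes.
    df = (df - df.min()) / (df.max() - df.min())
    print(df)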

Page 12

Decision Tree Classification Algorithms

Page 13

Decision Tree Algorithms

[Figure: two example decision trees. One segments customers into income groups 1–4 using age splits at 27 and 41; the other segments movie-goers by which films they saw (Birdcage, Courage Under Fire, Nutty Professor, Independence Day).]

Page 14

Classification by Decision Tree Induction

• Decision tree
  – A tree structure
  – Internal node denotes a test on an attribute
  – Branch represents an outcome of the test
  – Leaf nodes represent class labels
• Decision tree generation consists of two phases
  – Tree construction
    • At start, all the training examples are at the root
    • Partition examples recursively based on selected attributes
  – Tree pruning
    • Identify and remove branches that reflect noise or outliers
• Use of decision tree: classifying an unknown sample
  – Test the attribute values of the sample against the decision tree

Page 15

Training Dataset

age    income  student  credit_rating  buys_computer
<=30   high    no       fair           no
<=30   high    no       excellent      no
31…40  high    no       fair           yes
>40    medium  no       fair           yes
>40    low     yes      fair           yes
>40    low     yes      excellent      no
31…40  low     yes      excellent      yes
<=30   medium  no       fair           no
<=30   low     yes      fair           yes
>40    medium  yes      fair           yes
<=30   medium  yes      excellent      yes
31…40  medium  no       excellent      yes
31…40  high    yes      fair           yes
>40    medium  no       excellent      no

Page 16

Output: A Decision Tree for "buys_computer"

age?
├── <=30  → student?
│          ├── no  → no
│          └── yes → yes
├── 31…40 → yes
└── >40   → credit_rating?
           ├── excellent → no
           └── fair      → yes
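For reference, a sketch that fits a tree to the training table above with scikit-learn (not the hand-traced algorithm of the surrounding slides, though criterion="entropy" uses the same information-gain idea); categorical attributes are one-hot encoded:

    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = pd.DataFrame({
        "age": ["<=30", "<=30", "31..40", ">40", ">40", ">40", "31..40",
                "<=30", "<=30", ">40", "<=30", "31..40", "31..40", ">40"],
        "income": ["high", "high", "high", "medium", "low", "low", "low",
                   "medium", "low", "medium", "medium", "medium", "high", "medium"],
        "student": ["no", "no", "no", "no", "yes", "yes", "yes",
                    "no", "yes", "yes", "yes", "no", "yes", "no"],
        "credit_rating": ["fair", "excellent", "fair", "fair", "fair",
                          "excellent", "excellent", "fair", "fair", "fair",
                          "excellent", "excellent", "fair", "excellent"],
    })
    y = ["no", "no", "yes", "yes", "yes", "no", "yes",
         "no", "yes", "yes", "yes", "yes", "yes", "no"]

    X = pd.get_dummies(data)          # one-hot encode the categorical attributes
    tree = DecisionTreeClassifier(criterion="entropy").fit(X, y)
    print(export_text(tree, feature_names=list(X.columns)))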

Page 17

Algorithm for Decision Tree Induction

• Basic algorithm (a greedy algorithm)
  – Tree is constructed in a top-down recursive divide-and-conquer manner
  – At start, all the training examples are at the root
  – Some algorithms assume attributes to be categorical or ordinal (if continuous-valued, they are discretized in advance)
  – Examples are partitioned recursively based on selected attributes
  – Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
• Conditions for stopping partitioning
  – All samples for a given node belong to the same class
  – There are no remaining attributes for further partitioning – majority voting is employed for classifying the leaf
  – There are no samples left

Page 18

Attribute Selection Measure

• Information gain (ID3/C4.5)
  – All attributes are assumed to be categorical
  – Can be modified for continuous-valued attributes
• Gini index (IBM IntelligentMiner)
  – All attributes are assumed continuous-valued
  – Assume there exist several possible split values for each attribute
  – Can be modified for categorical attributes

Page 19

Information Gain (ID3/C4.5)

• Select the attribute with the highest information gain
• Assume there are two classes, P and N
  – Let the set of examples S contain p elements of class P and n elements of class N
  – The amount of information needed to decide whether an arbitrary example in S belongs to P or N is defined as

I(p, n) = -\frac{p}{p+n}\log_2\frac{p}{p+n} - \frac{n}{p+n}\log_2\frac{n}{p+n}
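The formula as a small Python function (a sketch, not from the slides):

    from math import log2

    def info(p: int, n: int) -> float:
        """I(p, n): expected bits needed to classify an example in S."""
        total = p + n
        return sum(-(c / total) * log2(c / total) for c in (p, n) if c)

    print(round(info(9, 5), 3))  # 0.940, as used two slides later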

Page 20

Information Gain in Decision Tree Induction

• Assume that using attribute A a set S will be partitioned into subsets {S1, S2, …, Sk}
  – If Si contains pi examples of P and ni examples of N, the entropy, i.e., the expected information needed to classify objects in all subsets Si, is

E(A) = \sum_{i=1}^{k} \frac{p_i + n_i}{p + n}\, I(p_i, n_i)

• The encoding information that would be gained by branching on A is

Gain(A) = I(p, n) - E(A)

Page 21

Attribute Selection by Information Gain Computation

gClass P: buys_computer = “yes”

gClass N: buys_computer = “no”

g I(p, n) = I(9, 5) =0.940

gCompute the entropy for age:

Hence

= 0.94 – 0.69 = 0.25Similarly

age pi ni I(pi, ni)<=30 2 3 0.97130…40 4 0 0>40 3 2 0.971

69.0)2,3(145

)0,4(144

)3,2(145

)(

=+

+=

I

IIageE

048.0)_(151.0)(029.0)(

===

ratingcreditGainstudentGainincomeGain

)(),()( ageEnpIageGain −=
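A sketch reproducing the computation above (Python; the counts come straight from the age table):

    from math import log2

    def info(p, n):
        return sum(-(c / (p + n)) * log2(c / (p + n)) for c in (p, n) if c)

    def entropy(partitions, p, n):
        """E(A) = sum of (p_i + n_i)/(p + n) * I(p_i, n_i) over the subsets."""
        return sum((pi + ni) / (p + n) * info(pi, ni) for pi, ni in partitions)

    p, n = 9, 5                      # buys_computer: 9 "yes", 5 "no"
    age = [(2, 3), (4, 0), (3, 2)]   # <=30, 31..40, >40
    print(round(info(p, n) - entropy(age, p, n), 3))  # Gain(age) = 0.246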

Page 22

Gini Index (IBM IntelligentMiner)

• If a data set T contains examples from n classes, the gini index gini(T) is defined as

gini(T) = 1 - \sum_{j=1}^{n} p_j^2

where p_j is the relative frequency of class j in T.

• If a data set T of size N is split into two subsets T1 and T2 with sizes N1 and N2 respectively, the gini index of the split data is defined as

gini_{split}(T) = \frac{N_1}{N}\, gini(T_1) + \frac{N_2}{N}\, gini(T_2)

• The attribute that provides the smallest gini_split(T) is chosen to split the node
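The two formulas as Python functions (a sketch; the split below is hypothetical):

    def gini(counts):
        """gini(T) = 1 - sum of squared class frequencies."""
        total = sum(counts)
        return 1 - sum((c / total) ** 2 for c in counts)

    def gini_split(counts1, counts2):
        """Size-weighted gini of a binary split of T into T1 and T2."""
        n1, n2 = sum(counts1), sum(counts2)
        n = n1 + n2
        return n1 / n * gini(counts1) + n2 / n * gini(counts2)

    print(round(gini([9, 5]), 3))                # impurity before splitting
    print(round(gini_split([4, 0], [5, 5]), 3))  # one hypothetical split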

Page 23

Avoid Overfitting in Classification

• The generated tree may overfit the training data
  – Too many branches, some of which may reflect anomalies due to noise or outliers
  – The result is poor accuracy on unseen samples
• Two approaches to avoid overfitting
  – Prepruning: halt tree construction early; do not split a node if this would push the goodness measure below a threshold
    • Difficult to choose an appropriate threshold
  – Postpruning: remove branches from a "fully grown" tree, obtaining a sequence of progressively pruned trees (a sketch follows this slide)
    • Use a set of data different from the training data to decide which is the "best pruned tree"
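One concrete postpruning recipe, sketched with scikit-learn's cost-complexity pruning (the slides do not name a specific pruning method); a held-out validation set picks the "best pruned tree" from the candidate sequence:

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

    # Each ccp_alpha yields one tree in a sequence of progressively pruned trees.
    path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)
    trees = [DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_tr, y_tr)
             for a in path.ccp_alphas]

    # Choose the pruned tree that does best on data not used for training.
    best = max(trees, key=lambda t: t.score(X_val, y_val))
    print(best.get_n_leaves(), best.score(X_val, y_val))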

Page 24

Approaches to Determine the Final Tree Size

• Separate training (2/3) and testing (1/3) sets
• Use cross-validation, e.g., 10-fold cross-validation
• Use all the data for training
  – but apply a statistical test (e.g., chi-square) to estimate whether expanding or pruning a node would improve the entire distribution
• Use the minimum description length (MDL) principle
  – halt growth of the tree when the encoding is minimized

Page 25

Classification Accuracy: Estimating Error Rates

• Partition: training-and-testing
  – use two independent data sets, e.g., training set (2/3) and test set (1/3)
  – suited to data sets with a large number of samples
• Cross-validation
  – divide the data set into k subsamples
  – use k-1 subsamples as training data and one subsample as test data: k-fold cross-validation
  – suited to data sets of moderate size
• Bootstrapping (leave-one-out)
  – suited to small data sets
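A 10-fold cross-validation sketch with scikit-learn (assumed; any classifier could stand in for the decision tree):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
    print(scores.mean())  # estimated accuracy; the error rate is 1 - mean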

Page 26

Boosting

• Boosting increases classification accuracy
  – Applicable to decision trees
  – Learn a series of classifiers, where each classifier in the series pays more attention to the examples misclassified by its predecessor
• Boosting requires only linear time and constant space
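A boosting sketch using AdaBoost from scikit-learn (one well-known boosting algorithm; the slides do not commit to a specific one). Its default base learner is a depth-1 decision tree, and each successive tree up-weights the examples its predecessors misclassified:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import AdaBoostClassifier

    X, y = load_breast_cancer(return_X_y=True)
    boost = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(boost.score(X, y))  # training accuracy of the boosted ensemble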

Page 27

Decision Trees (Strengths & Weaknesses)

• Generates understandable rules
• Scalability: fast classification of new cases; can classify data sets with millions of examples and hundreds of attributes at reasonable speed
• Handles both continuous and categorical values
• Indicates the best fields
• Classification accuracy comparable with other methods
• Expensive to train (but still better than most other classification methods)
• Uses rectangular regions
• Accuracy can decrease due to over-fitting of the data