
DMW Lab File Work


Institute of Technology & Management Bhilwara (Raj.) - 311001

Name: ………………………………………… Roll No: ……………………………

Semester: …………………………………….. Branch: Information Technology

Name of Lab: …………………………………………………………………………..

Index

S.No.  Assignment                                                     Date      Grade   Signature
1.     What is Data Mining?                                           / /2011
2.     What is Data Warehousing?                                      / /2011
3.     What is Schema? Explain the types of schema.                   / /2011
4.     What is Extract, Transform, Load [ETL] in data warehousing?    / /2011
5.     Explain Multidimensional Cube.                                 / /2011
6.     What is Clustering? Explain the types of Clustering.           / /2011
7.     Explain Weka Tool Software.                                    / /2011
8.     Describe Attribute Relation File Format [ARFF].                / /2011
9.     Explain the k-Nearest-Neighbor (k-NN) Graph.                   / /2011


Q 1. What is Data Mining?

Ans. Data Mining (the analysis step of the Knowledge Discovery in Databases process, or KDD), a

relatively young and interdisciplinary field of computer science, is the process of extracting patterns from

large data sets by combining methods from statistics and artificial intelligence with database management.

With recent tremendous technical advances in processing power, storage capacity, and inter-connectivity

of computer technology, data mining is seen as an increasingly important tool by modern businesses to

transform unprecedented quantities of digital data into business intelligence giving an informational

advantage. It is currently used in a wide range of profiling practices, such

as marketing, surveillance, fraud detection, and scientific discovery. The growing consensus that data

mining can bring real value has led to an explosion in demand for novel data mining technologies.

The related terms data dredging, data fishing and data snooping refer to the use of data mining techniques

to sample portions of the larger population data set that are (or may be) too small for reliable statistical

inferences to be made about the validity of any patterns discovered. These techniques can, however, be

used in the creation of new hypotheses to test against the larger data populations. Data mining consists of

an iterative sequence of the following steps:

1. Data Cleaning.

2. Data Integration.

3. Data Selection.

4. Data Transformation.

5. Data Mining.

6. Pattern Evaluation.

7. Knowledge Presentation.

Data mining commonly involves four classes of tasks.

1. Clustering – is the task of discovering groups and structures in the data that are in some way or

another "similar", without using known structures in the data.

2. Classification – is the task of generalizing known structure to apply to new data. For example, an

email program might attempt to classify an email as legitimate or spam. Common algorithms

include decision tree learning, nearest neighbor, naive Bayesian classification, neural

networks and support vector machines.

3. Regression – Attempts to find a function which models the data with the least error.

4. Association rule learning – Searches for relationships between variables. For example, a

supermarket might gather data on customer purchasing habits. Using association rule learning, the

supermarket can determine which products are frequently bought together and use this information

for marketing purposes. This is sometimes referred to as market basket analysis.
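As a concrete illustration of market basket analysis, the short Python sketch below computes the support and confidence of simple one-item rules over a handful of made-up transactions; the items, transactions and thresholds are all assumptions invented for the example, not data from any real system.

from itertools import combinations

# Toy transactions (each is a set of purchased items) -- invented for the demo.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"beer", "diapers", "milk"},
    {"bread", "butter"},
]

def support(itemset):
    # Support = fraction of transactions that contain the whole itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

min_support = 0.4
items = set().union(*transactions)
for a, b in combinations(sorted(items), 2):
    s = support({a, b})
    if s >= min_support:
        confidence = s / support({a})   # confidence of the rule {a} -> {b}
        print(f"{a} -> {b}: support={s:.2f}, confidence={confidence:.2f}")

A real implementation would use a frequent-itemset algorithm such as Apriori rather than enumerating all pairs, but the support/confidence idea is the same.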


Q 2. What is Data Warehousing?

Ans. A data warehouse (DW) is a database used for reporting and analysis. Data is offloaded from the operational systems and may pass through an operational data store for additional operations before it is used in the DW for reporting.

A data warehouse maintains its functions in three layers: staging, integration, and access. Staging is used

to store raw data for use by developers (analysis and support). The integration layer is used to integrate

data and to have a level of abstraction from users. The access layer is for getting data out for users.
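To make the three layers concrete, here is a deliberately tiny Python sketch that passes a few invented rows through staging, integration and access steps; the record layout and cleaning rules are assumptions made only for illustration.

# Toy raw feed from an operational system (invented data).
raw_feed = ["  Alice ,SALES,1200", "Bob,sales, 950 ", "Alice ,SALES,1200"]

def staging(lines):
    # Staging layer: keep the raw rows as-is for developers to inspect.
    return list(lines)

def integration(staged):
    # Integration layer: clean, standardize and de-duplicate the rows.
    cleaned = {tuple(field.strip().upper() for field in row.split(",")) for row in staged}
    return sorted(cleaned)

def access(integrated):
    # Access layer: expose a query-friendly view for end users.
    return [{"name": n.title(), "dept": d.title(), "amount": int(a)}
            for n, d, a in integrated]

print(access(integration(staging(raw_feed))))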

This definition of the data warehouse focuses on data storage. The main source of the data is cleaned,

transformed, catalogued and made available for use by managers and other business professionals for data

mining, online analytical processing, market research and decision support (Marakas & O'Brien 2009).

However, the means to retrieve and analyze data, to extract, transform and load data, and to manage

the data dictionary are also considered essential components of a data warehousing system. Many

references to data warehousing use this broader context. Thus, an expanded definition for data

warehousing includes business intelligence tools, tools to extract, transform and load data into the

repository, and tools to manage and retrieve metadata.

Architecture of Data Warehousing.

1. Operational database layer

The source data for the data warehouse — An organization's Enterprise Resource

Planning systems fall into this layer.

2. Data access layer

The interface between the operational and informational access layer — Tools to extract,

transform, load data into the warehouse fall into this layer.

3. Metadata layer

The data dictionary — This is usually more detailed than an operational system data dictionary.

There are dictionaries for the entire warehouse and sometimes dictionaries for the data that can be

accessed by a particular reporting and analysis tool.

4. Informational access layer

The data accessed for reporting and analyzing and the tools for reporting and analyzing data —

This is also called the data mart. Business intelligence tools fall into this layer. The Inmon and Kimball differences in design methodology have to do with this layer.


Q 3. What is Schema? Explain the types of schema.

Ans. A Schema of a database system is its structure described in a formal language supported by the

database (DBMS) and refers to the organization of data to create a blueprint of how a database will be

constructed (divided into database tables). The formal definition of database schema is a set of formulas

(sentences) that specify constraints imposed on the database. All constraints are expressible in the same

language. A database can be seen in terms of logic as a structure in a realization of the database

language. The states of a created conceptual schema are transformed into an explicit mapping, the

database schema. This describes how real world entities are modeled in the database. A database schema

specifies, based on the database administrator's knowledge of possible applications, the facts that can enter

the database, or those of interest to the possible end-users. The notion of a database schema plays the same

role as the notion of theory in predicate calculus. A model of this "theory" closely corresponds to a

database, which can be seen at any instant of time as a mathematical object. Thus a schema can contain

formulas representing integrity constraints specifically for an application and the constraints specifically

for a type of database, all expressed in the same database language. In a relational database, the schema

defines

the tables, fields, relationships, views, indexes, packages, procedures, functions, queues, triggers, types, sequences, materialized views, synonyms, database links, directories, Java, XML schemas, and other elements.

Schemata are generally stored in a data dictionary. Although a schema is defined in a text-based database language, the term is often used to refer to a graphical depiction of the database structure. In other words, a schema is the structure of the database that defines the objects in the database.

Types of Schema:

1. Snowflake Schema: A snowflake schema is a logical arrangement of tables in a multidimensional

database such that the relationship diagram resembles a snowflake in shape. The snowflake

schema is represented by centralized fact tables which are connected to multiple dimensions. The

snowflake schema is similar to the star schema. However, in the snowflake schema, dimensions

are normalized into multiple related tables, whereas the star schema's dimensions are denormalized

with each dimension represented by a single table. A complex snowflake shape emerges when the

dimensions of a snowflake schema are elaborate, having multiple levels of relationships, and the

child tables have multiple parent tables ("forks in the road"). The "snowflaking" effect only affects

the dimension tables and NOT the fact tables.


[Figure: Snowflake Schema]
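A minimal sketch of such a schema, using Python's built-in sqlite3 module, is shown below; the table and column names are invented for illustration. Note how the product dimension is normalized into a separate category table, while the central fact table references the dimension tables.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE category  (category_id INTEGER PRIMARY KEY, category_name TEXT);
CREATE TABLE product   (product_id  INTEGER PRIMARY KEY, product_name TEXT,
                        category_id INTEGER REFERENCES category(category_id));
CREATE TABLE date_dim  (date_id     INTEGER PRIMARY KEY, full_date TEXT);
-- Central fact table referencing the (normalized) dimension tables.
CREATE TABLE sales_fact (product_id INTEGER REFERENCES product(product_id),
                         date_id    INTEGER REFERENCES date_dim(date_id),
                         amount     REAL);
""")
print([row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")])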

2. Fact Constellation Schema: For each star schema or snowflake schema it is possible to construct a

fact constellation schema. This schema is more complex than the star or snowflake architecture because it contains multiple fact tables, which allows dimension tables to be shared among many fact tables. That solution is very flexible; however, it may be hard to manage and support. The main disadvantage of the fact constellation schema is its more complicated design, because many variants of aggregation must be considered. In a fact constellation schema, different fact tables are explicitly assigned to the dimensions that are relevant for the given facts. This may be useful when some facts are associated with a given dimension level and other facts with a deeper dimension level; for example, when there is a sales fact table (with details down to the exact date and invoice header id) and a sales forecast fact table calculated by month, client id and product id. In that case, using two fact tables at different levels of grouping is realized through a fact constellation model.

[Figure: Fact Constellation Schema]


Q 4. What is Extract, Transform, Load [ETL] in data warehousing?

Ans. Extract, transform and load (ETL) is a process in database usage and especially in data

warehousing that involves:

1. Extracting data from outside sources

2. Transforming it to fit operational needs (which can include quality levels)

3. Loading it into the end target (database or data warehouse).

Extract: The first part of an ETL process involves extracting the data from the source systems. Most data

warehousing projects consolidate data from different source systems. Each separate system may also use a

different data organization/format. Common data source formats are relational databases and flat files, but

may include non-relational database structures such as Information Management System (IMS) or other

data structures such as Virtual Storage Access Method (VSAM) or Indexed Sequential Access Method

(ISAM), or even fetching from outside sources such as through web spidering or screen-scraping. Streaming the extracted data and loading it on the fly into the destination database is another way of performing ETL when no intermediate data storage is required. In general, the goal of the extraction phase

is to convert the data into a single format which is appropriate for transformation processing.

Transform: The transform stage applies a series of rules or functions to the extracted data from the source

to derive the data for loading into the end target.

Load: The load phase loads the data into the end target, usually the data warehouse (DW). Depending on

the requirements of the organization, this process varies widely. Some data warehouses may overwrite

existing information with cumulative information; updating the extracted data is frequently done on a daily, weekly or monthly basis. Other DWs (or even other parts of the same DW) may add new data in a

historicized form, for example, hourly. To understand this, consider a DW that is required to maintain

sales records of the last year. Then, the DW will overwrite any data that is older than a year with newer

data. However, the entry of data for any one year window will be made in a historicized manner. The

timing and scope to replace or append are strategic design choices dependent on the time available and

the business needs. More complex systems can maintain a history and audit trail of all changes to the data

loaded in the DW.
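The following hedged Python sketch walks one invented record set through the three ETL phases: extract from a flat CSV source, transform the fields, and load them into an in-memory SQLite table standing in for the warehouse. File layout, column names and cleaning rules are assumptions made for the example.

import csv, io, sqlite3

source = "id,name,amount\n1, alice ,10.5\n2,BOB,3\n"

# Extract: read the flat-file source into dictionaries.
rows = list(csv.DictReader(io.StringIO(source)))

# Transform: trim and normalize names, convert amounts to numbers.
clean = [(int(r["id"]), r["name"].strip().title(), float(r["amount"])) for r in rows]

# Load: write the transformed rows into the target table.
dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE sales (id INTEGER, name TEXT, amount REAL)")
dw.executemany("INSERT INTO sales VALUES (?, ?, ?)", clean)
print(dw.execute("SELECT * FROM sales").fetchall())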


Q 5. Explain Multidimensional Cube.

Ans. Multidimensional cube is a data structure that allows fast analysis of data. It can also be defined as

the capability of manipulating and analyzing data from multiple perspectives. The arrangement of data

into cubes overcomes some limitations of relational databases.

Multidimensional cubes can be thought of as extensions to the two-dimensional array of a spreadsheet.

For example, a company might wish to analyze some financial data by product, by time period, by city, by

type of revenue and cost, and by comparing actual data with a budget. These additional methods of

analyzing the data are known as dimensions. Because there can be more than three dimensions in an

OLAP system the term hypercube is sometimes used.

[Figure: Multidimensional cube]

Functionality:

The Multidimensional cube consists of numeric facts called measures which are categorized

by dimensions. The cube metadata (structure) may be created from a star schema or snowflake schema of

tables in a relational database. Measures are derived from the records in the fact table and dimensions are

derived from the dimension tables.
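As a rough illustration of measures categorized by dimensions, the Python sketch below stores a sales measure keyed by (product, period, city) and rolls it up along chosen dimensions; all figures and dimension values are invented for the example.

from collections import defaultdict

# Fact cells: (product, period, city) -> sales measure (invented data).
cube = {
    ("widget", "2011-Q1", "Bhilwara"): 120,
    ("widget", "2011-Q2", "Bhilwara"): 150,
    ("gadget", "2011-Q1", "Jaipur"):    80,
    ("gadget", "2011-Q2", "Jaipur"):    95,
}

def roll_up(cube, keep):
    # Aggregate the measure over every dimension not listed in `keep`.
    names = ("product", "period", "city")
    out = defaultdict(int)
    for dims, value in cube.items():
        key = tuple(v for name, v in zip(names, dims) if name in keep)
        out[key] += value
    return dict(out)

print(roll_up(cube, keep={"product"}))   # total sales per product
print(roll_up(cube, keep={"period"}))    # total sales per period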


Q 6. What is Clustering? Explain the types of Clustering.

Ans. Clustering can be considered the most important unsupervised learning problem; so, as every other

problem of this kind, it deals with finding a structure in a collection of unlabeled data. A loose definition

of clustering could be "the process of organizing objects into groups whose members are similar in some way". A cluster is therefore a collection of objects which are "similar" to each other and "dissimilar" to the objects belonging to other clusters. We can show this with a simple graphical example:

[Figure: Clustering]

In this case we easily identify the 4 clusters into which the data can be divided; the similarity criterion

is distance: two or more objects belong to the same cluster if they are "close" according to a given

distance (in this case geometrical distance). This is called distance-based clustering.

Another kind of clustering is conceptual clustering: two or more objects belong to the same cluster if this

one defines a concept common to all of those objects. In other words, objects are grouped according to their fit

to descriptive concepts, not according to simple similarity measures.

Types of Clustering:

1. DBSCAN: It is a data clustering algorithm proposed by Martin Ester, Hans-Peter Kriegel, Jörg

Sander and Xiaowei Xu in 1996. It is a density-based clustering algorithm because it finds a

number of clusters starting from the estimated density distribution of corresponding nodes.

DBSCAN is one of the most common clustering algorithms and also most cited in scientific

literature.

2. Partitioning Method: A partitioning method divides a set of n objects into k non-overlapping clusters, where each cluster contains at least one object and each object belongs to exactly one cluster. Starting from an initial partition, the algorithm iteratively relocates objects between clusters until a criterion such as the total within-cluster distance is minimized. Typical representatives are k-means, which represents each cluster by the mean of its objects, and k-medoids (PAM), which represents each cluster by one of its objects. A minimal k-means sketch appears after this list.

3. Hierarchical Method: Hierarchical clustering is a method of cluster analysis which seeks to

build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:

Agglomerative: This is a "bottom up" approach: each observation starts in its own cluster, and

pairs of clusters are merged as one moves up the hierarchy.

Divisive: This is a "top down" approach: all observations start in one cluster, and splits are

performed recursively as one moves down the hierarchy.

4. GRID Based Method: The grid-based methods have the fastest processing time that typically

depends on the size of the grid rather than on the number of data objects. These methods use a single uniform grid

mesh to partition the entire problem domain into cells and the data objects located within a cell are

represented by the cell using a set of statistical attributes from the objects.


5. Model Based Method: Model-based clustering techniques can be traced at least as far back as

Wolfe (1963). In more recent years model-based clustering has appeared in the statistics literature

with increased frequency. Typically the data are clustered using some assumed mixture modeling

structure. Then the group memberships are "learned" in an unsupervised fashion.
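The following pure-Python sketch of the k-means algorithm illustrates a distance-based partitioning method; the points, the value of k and the number of iterations are invented for the demo, and a production implementation would add a convergence test.

import math, random

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        # Update step: move each center to the mean of its group.
        centers = [tuple(sum(coord) / len(g) for coord in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

points = [(1, 1), (1.2, 0.8), (0.9, 1.1), (8, 8), (8.2, 7.9), (7.8, 8.1)]
centers, groups = kmeans(points, k=2)
print(centers)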


Q 7. Explain Weka Tool Software.

Ans. The Weka (Waikato Environment for Knowledge Analysis) workbench contains a collection of

visualization tools and algorithms for data analysis and predictive modeling, together with graphical user

interfaces for easy access to this functionality. The original non-Java version of Weka was a

Tcl/Tk front-end to (mostly third-party) modeling algorithms implemented in other programming languages, plus data preprocessing utilities in C, and a Makefile-based system for running machine

learning experiments. This original version was primarily designed as a tool for analyzing data from

agricultural domains, but the more recent fully Java-based version (Weka 3), for which development

started in 1997, is now used in many different application areas, in particular for educational purposes and

research. Advantages of Weka include:

1. Free availability under the GNU General Public License

2. Portability, since it is fully implemented in the Java programming language and thus runs on

almost any modern computing platform

3. A comprehensive collection of data preprocessing and modeling techniques

4. Ease of use due to its graphical user interfaces

File formats used by Weka are:

1. ARFF (Attribute-Relation File Format).

2. XRFF (XML attribute-Relation File Format).

Weka supports several standard data mining tasks, more specifically,

data preprocessing, clustering, classification, regression, visualization, and feature selection. All of Weka's

techniques are predicated on the assumption that the data is available as a single flat file or relation, where

each data point is described by a fixed number of attributes (normally, numeric or nominal attributes, but

some other attribute types are also supported). Weka provides access to SQL databases using Java

Database Connectivity and can process the result returned by a database query. It is not capable of multi-

relational data mining, but there is separate software for converting a collection of linked database tables

into a single table that is suitable for processing using Weka. Another important area that is currently not

covered by the algorithms included in the Weka distribution is sequence modeling.

Weka's main user interface is the Explorer, but essentially the same functionality can be accessed through

the component-based Knowledge Flow interface and from the command line. There is also

the Experimenter, which allows the systematic comparison of the predictive performance of Weka's

machine learning algorithms on a collection of datasets.

The Explorer interface has several panels that give access to the main components of the workbench:


1. The Preprocess panel has facilities for importing data from a database, a CSV file, etc., and for

preprocessing this data using a so-called filtering algorithm. These filters can be used to transform

the data (e.g., turning numeric attributes into discrete ones) and make it possible to delete instances

and attributes according to specific criteria.

2. The Classify panel enables the user to apply classification and regression algorithms

(indiscriminately called classifiers in Weka) to the resulting dataset, to estimate the accuracy of the

resulting predictive model, and to visualize erroneous predictions, ROC curves, etc., or the model

itself (if the model is amenable to visualization like, e.g., a decision tree).

3. The Associate panel provides access to association rule learners that attempt to identify all

important interrelationships between attributes in the data.

4. The Cluster panel gives access to the clustering techniques in Weka, e.g., the simple k-

means algorithm. There is also an implementation of the expectation maximization algorithm for

learning a mixture of normal distributions.

5. The Select attributes panel provides algorithms for identifying the most predictive attributes in a

dataset.

6. The Visualize panel shows a scatter plot matrix, where individual scatter plots can be selected and

enlarged, and analyzed further using various selection operators.
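Besides the Explorer, the same learners can be driven from the command line. The hedged Python sketch below shells out to Weka's command-line interface to train and cross-validate a J48 decision tree; the paths to weka.jar and to the ARFF file are assumptions that depend on the local installation.

import subprocess

cmd = [
    "java", "-cp", "weka.jar",        # assumed location of the Weka jar
    "weka.classifiers.trees.J48",     # C4.5-style decision tree learner
    "-t", "data/iris.arff",           # training file; 10-fold CV by default
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)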


Q 8. Describe Attribute Relation File Format [ARFF].

Ans. ARFF files have two distinct sections. The first section is the Header information, which is followed by the Data information.

The Header of the ARFF file contains the name of the relation, a list of the attributes (the columns in the

data), and their types. An example header on the standard IRIS dataset looks like this:

% 1. Title: Iris Plants Database

%

% 2. Sources:

% (a) Creator: R.A. Fisher

% (b) Donor: Michael Marshall (MARSHALL%[email protected])

% (c) Date: July, 1988

%

@RELATION iris

@ATTRIBUTE sepallength NUMERIC

@ATTRIBUTE sepalwidth NUMERIC

@ATTRIBUTE petallength NUMERIC

@ATTRIBUTE petalwidth NUMERIC

@ATTRIBUTE class {Iris-setosa,Iris-versicolor,Iris-virginica}

The Data of the ARFF file looks like the following:

@DATA

5.1,3.5,1.4,0.2,Iris-setosa

4.9,3.0,1.4,0.2,Iris-setosa

4.7,3.2,1.3,0.2,Iris-setosa

4.6,3.1,1.5,0.2,Iris-setosa

5.0,3.6,1.4,0.2,Iris-setosa

5.4,3.9,1.7,0.4,Iris-setosa

4.6,3.4,1.4,0.3,Iris-setosa

5.0,3.4,1.5,0.2,Iris-setosa

4.4,2.9,1.4,0.2,Iris-setosa

4.9,3.1,1.5,0.1,Iris-setosa

Lines that begin with a % are comments.

The @RELATION, @ATTRIBUTE and @DATA declarations are case insensitive.

Examples

Several well-known machine learning datasets are distributed with Weka in the $WEKAHOME/data

directory as ARFF files.


The ARFF Header Section

The ARFF Header section of the file contains the relation declaration and attribute declarations.

The @relation Declaration

The relation name is defined as the first line in the ARFF file. The format is:

@relation <relation-name>

where <relation-name> is a string. The string must be quoted if the name includes spaces.

The @attribute Declarations

Attribute declarations take the form of an ordered sequence of @attribute statements. Each attribute in the data set has its own @attribute statement which uniquely defines the name of that attribute and its data

type. The order the attributes are declared indicates the column position in the data section of the file. For

example, if an attribute is the third one declared then Weka expects that all of that attribute's values will be found in the third comma-delimited column.

The format for the @attribute statement is:

@attribute <attribute-name> <datatype>

where the <attribute-name> must start with an alphabetic character. If spaces are to be included in the

name then the entire name must be quoted.

The <datatype> can be any of the four types currently (version 3.2.1) supported by Weka:

numeric

<nominal-specification>

string

date [<date-format>]

where <nominal-specification> and <date-format> are defined below. The

keywords numeric, string and date are case insensitive.

Numeric attributes

Numeric attributes can be real or integer numbers.

Nominal attributes

Nominal values are defined by providing a <nominal-specification> listing the possible values:

{<nominal-name1>, <nominal-name2>, <nominal-name3>, ...}


For example, the class value of the Iris dataset can be defined as follows:

@ATTRIBUTE class {Iris-setosa,Iris-versicolor,Iris-virginica}

Values that contain spaces must be quoted.

String attributes

String attributes allow us to create attributes containing arbitrary textual values. This is very useful in text-

mining applications, as we can create datasets with string attributes, then write Weka Filters to manipulate

strings (like StringToWordVectorFilter). String attributes are declared as follows:

@ATTRIBUTE LCC string

Date attributes

Date attribute declarations take the form:

@attribute <name> date [<date-format>]

where <name> is the name for the attribute and <date-format> is an optional string specifying how date

values should be parsed and printed (this is the same format used by SimpleDateFormat). The default

format string accepts the ISO-8601 combined date and time format: "yyyy-MM-dd'T'HH:mm:ss".

Dates must be specified in the data section as the corresponding string representations of the date/time

(see example below).

ARFF Data Section

The ARFF Data section of the file contains the data declaration line and the actual instance lines.

The @data Declaration

The @data declaration is a single line denoting the start of the data segment in the file. The format is:

@data

The instance data

Each instance is represented on a single line, with carriage returns denoting the end of the instance.


Attribute values for each instance are delimited by commas. They must appear in the order that they were

declared in the header section (i.e. the data corresponding to the nth @attribute declaration is always the

nth field of the attribute).

Missing values are represented by a single question mark, as in:

@data

4.4,?,1.5,?,Iris-setosa

Values of string and nominal attributes are case sensitive, and any that contain space must be quoted, as

follows:

@relation LCCvsLCSH

@attribute LCC string

@attribute LCSH string

@data

AG5, 'Encyclopedias and dictionaries.;Twentieth century.'

AS262, 'Science -- Soviet Union -- History.'

AE5, 'Encyclopedias and dictionaries.'

AS281, 'Astronomy, Assyro-Babylonian.;Moon -- Phases.'

AS281, 'Astronomy, Assyro-Babylonian.;Moon -- Tables.'

Dates must be specified in the data section using the string representation specified in the attribute

declaration. For example:

@RELATION Timestamps

@ATTRIBUTE timestamp DATE "yyyy-MM-dd HH:mm:ss"

@DATA

"2001-04-03 12:12:12"

"2001-05-03 12:59:55"

Sparse ARFF files

Sparse ARFF files are very similar to ARFF files, but data with value 0 are not explicitly represented. Sparse ARFF files have the same header (i.e., @relation and @attribute tags) but the data section is

different. Instead of representing each value in order, like this:

@data


0, X, 0, Y, "class A"

0, 0, W, 0, "class B"

the non-zero attributes are explicitly identified by attribute number and their value stated, like this:

@data

{1 X, 3 Y, 4 "class A"}

{2 W, 4 "class B"}

Each instance is surrounded by curly braces, and the format for each entry is: <index> <space> <value>

where index is the attribute index (starting from 0).
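To tie the format together, here is a deliberately simplified ARFF reader in pure Python: it handles comments, the header/data split and dense instances, but not quoted values, sparse instances or date parsing; it is a sketch for illustration rather than a replacement for Weka's own loader.

def read_arff(text):
    attributes, data, in_data = [], [], False
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("%"):
            continue                       # skip blank lines and comments
        lowered = line.lower()
        if lowered.startswith("@attribute"):
            attributes.append(line.split()[1])
        elif lowered.startswith("@data"):
            in_data = True
        elif in_data:
            data.append(dict(zip(attributes, line.split(","))))
    return attributes, data

sample = """@RELATION iris
@ATTRIBUTE sepallength NUMERIC
@ATTRIBUTE class {Iris-setosa,Iris-versicolor}
@DATA
5.1,Iris-setosa
"""
print(read_arff(sample))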


Q 9. Explain the k-Nearest-Neighbor (k-NN) Graph.

Ans. The nearest neighbor graph (NNG) for a set of n objects P in a metric space (e.g., for a set of

points in the plane with Euclidean distance) is a directed graph with P being its vertex set and with

a directed edge from p to q whenever q is a nearest neighbor of p (i.e., the distance from p to q is no larger

than from p to any other object from P).

In many discussions the directions of the edges are ignored and the NNG is defined as an ordinary

(undirected) graph. However, the nearest neighbor relation is not a symmetric one, i.e., p from the

definition is not necessarily a nearest neighbor for q.

In some discussions, in order to make the nearest neighbor for each object unique, the set P is indexed and

in the case of a tie the object with, e.g., the largest index is taken for the nearest neighbor.

The k-nearest neighbor graph (k-NNG) is a graph in which two vertices p and q are connected by an

edge if the distance between p and q is among the k smallest distances from p to other objects from P. The NNG is a special case of the k-NNG, namely it is the 1-NNG. k-NNGs obey a separator theorem: they can be partitioned into two subgraphs of at most n(d + 1)/(d + 2) vertices each by the removal of O(k^(1/d) · n^(1 − 1/d)) points.

Another special case is the (n − 1)-NNG. This graph is called the farthest neighbor graph (FNG).

In theoretical discussions of algorithms a kind of general position is often assumed, namely, the nearest

(k-nearest) neighbor is unique for each object. In implementations of the algorithms it is necessary to bear

in mind that this is not always the case.
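A small Python sketch that builds the directed k-nearest-neighbor graph for a handful of 2-D points with Euclidean distance is given below; the points and the value of k are invented, and ties are broken simply by the sort order rather than by the indexing rule mentioned above.

import math

def knn_graph(points, k):
    edges = {}
    for i, p in enumerate(points):
        # Sort the other vertices by distance to p and keep the k closest.
        others = sorted((j for j in range(len(points)) if j != i),
                        key=lambda j: math.dist(p, points[j]))
        edges[i] = others[:k]
    return edges

points = [(0, 0), (1, 0), (0, 1), (5, 5), (5, 6)]
print(knn_graph(points, k=2))   # directed edges: vertex -> its 2 nearest neighbors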