
Metrics For Learning Object Metadata



ECTEL2006 Doctoral Consortium presentation about my research in Metrics for Learning Object Metadata. More information: http://ariadne.cti.espol.edu.ec/Learnometrics


Page 1: Metrics For Learning Object Metadata

Xavier Ochoa, ESPOL

Erik Duval, KULeuven

Page 2: Metrics For Learning Object Metadata

Context of the Research

Page 3: Metrics For Learning Object Metadata

Learnometrics

• Study empirical regularities in data
• Develop mathematical models
• Understand the influence/impact of Learning Objects
• Produce useful metrics

Page 4: Metrics For Learning Object Metadata

Example of Learnometrics

The number of downloads does not depend on the number of objects published.
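As a rough illustration of how such a regularity can be checked, the sketch below computes a rank correlation between the two quantities. The data and the use of Spearman's rho are assumptions for illustration, not the study's actual dataset or method.

```python
from scipy.stats import spearmanr

# Hypothetical per-repository figures; stand-ins for real repository data.
objects_published = [120, 450, 980, 2300, 5100]
downloads = [8000, 3500, 9100, 4000, 8700]

# Rank correlation between repository size and downloads.
rho, p = spearmanr(objects_published, downloads)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
# A rho near zero (or a non-significant p) is consistent with downloads
# not depending on the number of objects published.
```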

Page 5: Metrics For Learning Object Metadata

Example of Learnometrics 2

The downloads of objects follow a power-law distribution.
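A minimal sketch of how such a distribution can be fitted: the maximum-likelihood estimator for a discrete power-law exponent (the approximation from Clauset, Shalizi & Newman 2009). The download counts here are invented for illustration.

```python
import math

# Hypothetical download counts per learning object (assumed data).
downloads = [1, 1, 1, 2, 2, 3, 5, 8, 13, 40, 150, 900]

xmin = 1  # smallest count included in the power-law tail
tail = [x for x in downloads if x >= xmin]

# MLE of the exponent, discrete approximation (Clauset et al. 2009):
# alpha = 1 + n / sum(ln(x_i / (xmin - 0.5)))
alpha = 1 + len(tail) / sum(math.log(x / (xmin - 0.5)) for x in tail)
print(f"estimated exponent alpha = {alpha:.2f}")
```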

Page 6: Metrics For Learning Object Metadata

More than Learning Object Metadata

• All information about Learning Objects:
  – The object itself
  – LOM / DC / MPEG-7
  – Contextual Attention Metadata (CAM)
  – Sequencing information (SCORM / LAMS)

Page 7: Metrics For Learning Object Metadata

Uses of Learning Object Metadata Metrics

• To improve Learning Object tools:
  – Indexing material
    • LOM Quality Metrics
  – Searching / finding
    • Ranking Metrics
    • Recommendation Metrics
  – Reuse
    • Adaptation Metrics

Page 8: Metrics For Learning Object Metadata

Learning Object Metadata Quality

The production, management, and consumption of Learning Object Metadata are vastly surpassing the human capacity to review or process these metadata.

Page 9: Metrics For Learning Object Metadata

LOM Quality Metrics

Page 10: Metrics For Learning Object Metadata

Evaluation of LOM Quality Metrics

Textual information content correlates highly with human-assigned quality scores.
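The slides do not spell out how textual information content is computed; one plausible reading, sketched below under that assumption, sums the self-information (-log2 p) of each term in a metadata field against background corpus frequencies, so records with rarer, more specific wording score higher.

```python
import math
from collections import Counter

# Assumed background corpus term counts; stand-ins for real corpus statistics.
corpus_counts = Counter({"learning": 900, "object": 800, "metadata": 300,
                         "network": 150, "bayesian": 12, "the": 5000})
total = sum(corpus_counts.values())

def information_content(text: str) -> float:
    """Sum of -log2 p(term) over the field's terms."""
    ic = 0.0
    for term in text.lower().split():
        p = corpus_counts.get(term, 1) / total  # smooth unseen terms
        ic += -math.log2(p)
    return ic

print(f"{information_content('bayesian network learning'):.1f} bits")
```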

Page 11: Metrics For Learning Object Metadata

LOM Quality Visualization

Page 12: Metrics For Learning Object Metadata
Page 13: Metrics For Learning Object Metadata

Ranking Metrics

• Network-Analysis Rank (popularity)
  – "Most users prefer these objects…"
• Similarity Recommendation (clustering)
  – "If you like this LO, you will also like…"
• Personalized Rank (profiling)
  – "Based on your history, you will like these objects…"
• Contextual Recommendation Rank
  – "This object seems right for the lesson you are creating right now…"

Page 14: Metrics For Learning Object Metadata

Network-Analysis Metrics

• CAM as K-Partite Graph

[Figure: CAM as a k-partite graph. Object nodes (O1, O2, O3), user nodes (U1, U2), course nodes (C1, C2), and author nodes (A1, A2), grouped into Object, User, Course, and Author partitions.]
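A minimal sketch of building such a graph with networkx; the event field names are assumptions about what a CAM record carries, not the actual CAM schema.

```python
import networkx as nx

# Each attention event links a user, an object, a course and an author
# (hypothetical records for illustration).
events = [
    {"user": "U1", "object": "O1", "course": "C1", "author": "A1"},
    {"user": "U1", "object": "O2", "course": "C1", "author": "A2"},
    {"user": "U2", "object": "O2", "course": "C2", "author": "A2"},
]

G = nx.Graph()
for e in events:
    for part in ("user", "object", "course", "author"):
        G.add_node(e[part], partition=part)  # tag nodes with their partition
    # Edges only cross partitions, keeping the graph k-partite.
    G.add_edge(e["user"], e["object"])
    G.add_edge(e["object"], e["course"])
    G.add_edge(e["object"], e["author"])

# A simple network-analysis popularity rank: object degree.
objects = [n for n, d in G.nodes(data=True) if d["partition"] == "object"]
print(sorted(objects, key=G.degree, reverse=True))
```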

Page 15: Metrics For Learning Object Metadata

Application

Page 16: Metrics For Learning Object Metadata

Similarity Metric

[Figure: left, the 2-partite graph of users (U1–U6) and objects (O1–O3); right, the folded graph over users only, where users are linked when they share objects.]
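A minimal sketch of the folding step, using toy data shaped like the figure: project the user-object graph onto users, linking two users whenever they acted on the same object, with the edge weight counting shared objects.

```python
from collections import defaultdict
from itertools import combinations

# Toy bipartite data: which users acted on which object.
object_users = {
    "O1": {"U1", "U2", "U3"},
    "O2": {"U3", "U4"},
    "O3": {"U4", "U5", "U6"},
}

# Fold onto the user partition: co-usage counts become edge weights.
folded = defaultdict(int)
for users in object_users.values():
    for u, v in combinations(sorted(users), 2):
        folded[(u, v)] += 1

for (u, v), w in sorted(folded.items()):
    print(f"{u} -- {v}  (shared objects: {w})")
```

The weights of the folded graph can then feed a similarity metric or community detection, as in the next slide.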

Page 17: Metrics For Learning Object Metadata

Communities in ARIADNE

Page 18: Metrics For Learning Object Metadata

Application

Page 19: Metrics For Learning Object Metadata

Personalized Rank

• We can create a profile of the user based on their CAM.

• We can use the same LOM record structure to store this profile.

• Instead of a crisp preference for a single value, the user has a fuzzy set with different degrees of "preference" over all the possible values.

Page 20: Metrics For Learning Object Metadata

Personalized Rank

Topic importance = 0.9
Language importance = 0.6

U1 = {(0.8/ComputerScience + 0.2/Physics), (0.6/English + 0.2/Spanish + 0.2/French)}

O1 = {(1.0/ComputerScience), (1.0/Spanish)}

O2 = {(1.0/Physics), (1.0/English)}

Rank(O1) = 0.9·0.8 + 0.6·0.2 = 0.84

Rank(O2) = 0.9·0.2 + 0.6·0.6 = 0.54
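A minimal sketch of this computation; it reproduces the two ranks on the slide by taking a weighted sum of the user's membership degrees for each field value of the object.

```python
# Field importances and fuzzy user profile from the slide.
weights = {"topic": 0.9, "language": 0.6}
user = {
    "topic": {"ComputerScience": 0.8, "Physics": 0.2},
    "language": {"English": 0.6, "Spanish": 0.2, "French": 0.2},
}
objects = {
    "O1": {"topic": "ComputerScience", "language": "Spanish"},
    "O2": {"topic": "Physics", "language": "English"},
}

def rank(obj: dict) -> float:
    # Weighted sum of the user's preference degree for each field value.
    return sum(w * user[f].get(obj[f], 0.0) for f, w in weights.items())

for name, obj in objects.items():
    print(f"Rank({name}) = {rank(obj):.2f}")  # O1 -> 0.84, O2 -> 0.54
```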

Page 21: Metrics For Learning Object Metadata

Contextual Recommending

• CAM can be considered not only a source of historical data, but also a continuous stream of contextualized attention information.

• LMSs could provide much more contextual information.

• Use techniques that exploit this contextual information; the simplest is term extraction (see the sketch below).
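A minimal sketch of that simplest technique: pull the most frequent non-stopword terms out of the lesson's current text and use them as a query against LOM records. The stopword list and lesson text are illustrative assumptions.

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "is", "between"}

def extract_terms(context_text: str, k: int = 5) -> list:
    # Frequency-based term extraction over the lesson's visible text.
    words = [w.strip(".,;:()").lower() for w in context_text.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [term for term, _ in counts.most_common(k)]

lesson = ("Introduction to Bayesian networks. Bayesian networks model "
          "conditional dependence between variables.")
print(extract_terms(lesson))  # candidate query terms to match against LOM
```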

Page 22: Metrics For Learning Object Metadata

Evaluation

• Experimentation
  – Ranking vs. no ranking
  – Different ranking strategies/combinations
• User feedback
  – Machine learning
  – Optimization
• Transference
  – Other reusable components

Page 23: Metrics For Learning Object Metadata

Research Questions (Summary)

• How can information about Learning Objects (the object itself, LOM, CAM, SCORM) be used to create relevance/quality metrics to rank/recommend Learning Objects?

• Are the resulting metrics feasible to calculate, easy to integrate into existing applications, and meaningful/useful for end users?

• Can these metrics also be applied to other reusable components?

Page 24: Metrics For Learning Object Metadata

Thank you! Comments, suggestions, and critiques are welcome!

More information: http://ariadne.cti.espol.edu.ec/M4M

[email protected]