Xavier Ochoa, ESPOL
Erik Duval, KULeuven
Context of the Research
Learnometrics
• Study empirical regularities in the data
• Develop mathematical models
• Understand the influence/impact of Learning Objects
• Produce useful metrics
Example of Learnometrics 1
The number of downloads does not depend on the number of objects published.
Example of Learnometrics 2
Object downloads follow a power-law distribution.
More than Learning Object Metadata
• All information about Learning Objects:
  – The object itself
  – LOM / DC / MPEG-7
  – Contextual Attention Metadata (CAM)
  – Sequencing information (SCORM / LAMS)
Uses of Learning Object Metadata Metrics
• To improve Learning Object tools:
  – Indexing material
    • LOM quality metrics
  – Searching / finding
    • Ranking metrics
    • Recommendation metrics
  – Reuse
    • Adaptation metrics
Learning Object Metadata Quality
The production, management, and consumption of Learning Object Metadata have vastly surpassed the human capacity to review or process this metadata.
LOM Quality Metrics
Evaluation of LOM Quality Metrics
Textual information content correlates highly with human-assigned quality scores.
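As a rough illustration of the idea behind the textual-information-content metric (a sketch of the general notion, not the exact formula used in the evaluation): rare terms carry more information, so records with richer, more specific free-text fields score higher.

```python
import math
from collections import Counter

def textual_information_content(record_text, corpus_counts, corpus_size):
    """Sum of -log2 p(term) over a record's free-text terms (sketch)."""
    score = 0.0
    for term in record_text.lower().split():
        # Laplace smoothing so unseen terms do not yield p = 0
        p = (corpus_counts.get(term, 0) + 1) / (corpus_size + len(corpus_counts) + 1)
        score += -math.log2(p)
    return score

# Hypothetical term counts over all metadata records in a repository
corpus_counts = Counter({"introduction": 50, "to": 400, "java": 10, "recursion": 2})
corpus_size = sum(corpus_counts.values())

# The more specific description carries more information
print(textual_information_content("introduction to java", corpus_counts, corpus_size))
print(textual_information_content("introduction to recursion in java", corpus_counts, corpus_size))
```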
LOM Quality Visualization
Ranking Metrics
• Network-Analysis Rank (popularity)
  – Most users prefer these objects…
• Similarity Recommendation (clustering)
  – If you like this LO, you will also like…
• Personalized Rank (profiling)
  – Based on your history, you will like these objects…
• Contextual Recommendation Rank
  – This object seems right for the lesson you are creating right now…
Network-Analysis Metrics
• CAM as K-Partite Graph
[Figure: k-partite CAM graph linking nodes across the Object, Course, User, and Author partitions]
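A minimal sketch of loading CAM-style events into such a graph with networkx (node names and events are illustrative, not real CAM data); a simple popularity rank then falls out of the degree of the object nodes:

```python
import networkx as nx

# Sketch: build a k-partite graph from CAM-style attention events
G = nx.Graph()
G.add_nodes_from(["O1", "O2", "O3"], partition="object")
G.add_nodes_from(["U1", "U2"], partition="user")
G.add_nodes_from(["C1", "C2"], partition="course")
G.add_nodes_from(["A1", "A2"], partition="author")

# Each edge records an attention event between nodes of different partitions
G.add_edges_from([("U1", "O1"), ("U1", "O2"), ("U2", "O3"),
                  ("A1", "O1"), ("A2", "O3"), ("C1", "O1"), ("C2", "O2")])

# A simple popularity-style rank: degree of each object node
popularity = {n: G.degree(n) for n, d in G.nodes(data=True) if d["partition"] == "object"}
print(sorted(popularity.items(), key=lambda kv: -kv[1]))
```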
Application
Similarity Metric
[Figure: 2-partite graph (users and objects) and the folded graph over users]
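The folding step can be sketched with networkx's one-mode projection (toy data echoing the figure): two users become linked when they have attended to the same object, and the edge weight counts the objects they share.

```python
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
users, objects = ["U1", "U2", "U3", "U4", "U5", "U6"], ["O1", "O2", "O3"]
B.add_nodes_from(users, bipartite=0)
B.add_nodes_from(objects, bipartite=1)
B.add_edges_from([("U1", "O1"), ("U2", "O1"), ("U3", "O1"),
                  ("U4", "O2"), ("U5", "O2"), ("U5", "O3"), ("U6", "O3")])

# Fold onto the user partition: an edge means two users share at least
# one object; the weight counts how many objects they share
folded = bipartite.weighted_projected_graph(B, users)
print(list(folded.edges(data=True)))
```

Running a community-detection algorithm over this folded graph is what surfaces user communities such as those observed in ARIADNE.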
Communities in ARIADNE
Application
Personalized Rank
• We can create a profile of the user based on their CAM
• We can use the same LOM record to store this profile
• Instead of having a crisp preference for a value, the user will have a fuzzy set with different degrees of “preference” for all the possible values.
Personalized Rank
Topic importance = 0.9
Language importance = 0.6

U1 = {(0.8/ComputerScience + 0.2/Physics), (0.6/English + 0.2/Spanish + 0.2/French)}
O1 = {(1.0/ComputerScience), (1.0/Spanish)}
O2 = {(1.0/Physics), (1.0/English)}

Rank(O1) = 0.9 × 0.8 + 0.6 × 0.2 = 0.84
Rank(O2) = 0.9 × 0.2 + 0.6 × 0.6 = 0.54
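The same computation in a few lines of Python, with the importance weights and fuzzy membership degrees taken directly from the example above:

```python
# Importance weights and fuzzy membership degrees from the slide above
importance = {"topic": 0.9, "language": 0.6}

user = {  # fuzzy preference set per LOM attribute
    "topic": {"ComputerScience": 0.8, "Physics": 0.2},
    "language": {"English": 0.6, "Spanish": 0.2, "French": 0.2},
}

objects = {
    "O1": {"topic": {"ComputerScience": 1.0}, "language": {"Spanish": 1.0}},
    "O2": {"topic": {"Physics": 1.0}, "language": {"English": 1.0}},
}

def rank(obj):
    # Weight each attribute's match between the object's values and
    # the user's fuzzy preference for those values
    return sum(w * sum(user[attr].get(value, 0.0) * degree
                       for value, degree in obj[attr].items())
               for attr, w in importance.items())

for name, attrs in objects.items():
    print(name, rank(attrs))  # O1 -> 0.84, O2 -> 0.54
```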
Contextual Recommending
• CAM can be treated not only as a source of historical data, but also as a continuous stream of contextualized attention information.
• LMSs could provide much more contextual information.
• Techniques can exploit this contextual information; the simplest is term extraction (sketched below).
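A minimal sketch of that simplest technique, naive frequency-based term extraction over the lesson text being authored (the stopword list and sample text are illustrative):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "in", "for", "is", "on", "by"}

def extract_terms(context_text, k=5):
    """Return the k most frequent non-stopword terms in the context."""
    tokens = re.findall(r"[a-z]+", context_text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return [term for term, _ in counts.most_common(k)]

# Terms from the lesson under construction can then be matched against
# LOM titles/keywords to recommend objects in context
print(extract_terms("Introduction to sorting algorithms: sorting by "
                    "comparison, quicksort and merge sort"))
```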
Evaluation
• Experimentation
  – Ranking vs. no ranking
  – Different ranking strategies/combinations
• User feedback
  – Machine learning
  – Optimization
• Transference
  – Other reusable components
Research Questions (Summary)
• How can information about Learning Objects (the object itself, LOM, CAM, SCORM) be used to create relevance/quality metrics to rank and recommend Learning Objects?
• Are the resulting metrics feasible to calculate, easy to integrate into existing applications, and meaningful/useful for end users?
• Can these metrics also be applied to other reusable components?
Thank you! Gracias!
Comments, suggestions, and critiques are welcome!
More information: http://ariadne.cti.espol.edu.ec/M4M