

Presented By:

Aamir Mushtaq

Jesal Mistry

Kapil Tekwani

Neville Shah

Visual Representation of Knowledge Articles as Dynamic Interactive Connected Graph Nodes

Internal Guide: Prof. Mrs. Kalyani Waghmare
External Guides: Mr. Prajwalit Bhopale, Mr. Kiran Kulkarni
Sponsored Organization: Infinitely Beta


Overview

• Introduction
• Problem Definition
• ACM Keywords
• Motivation of the Project
• Algorithm Used
• System Flow Diagram
• System Architecture
• Mathematical Model
• Feasibility Analysis
• System UML Diagrams
• Main Modules
• Technologies Used
• Proposed UI
• Restrictions, Limitations & Constraints
• References
• Paper Publications


Introduction


• Online knowledge articles have become increasingly popular

• E.g., Wikipedia is used by students, educators, professionals, etc.

• Problem faced:

• Article topics to be studied are not easy to understand

• Take too much time

• Have too much content

• Possible solution: Create a graphical visualization of knowledge articles.

• Enables users to obtain an easily understandable overview of an article

In this project we present an innovative technique for visualization of content and contextual information of Webpages for an effective browsing experience.


Problem Definition


To implement an easy and interactive e-learning tool for knowledge articles. It will be implemented as a browser plugin that presents a graphical view of the document as connected graph nodes: the main node focuses on the keyword for which information is sought, while the neighboring nodes represent the keywords most prominently related to it. In addition, there are semantic links between the nodes, where the edges represent the relations.


ACM Keywords

H. Information Systems
H.2 Database Management
H.2.8 Database Applications: Data Mining
H.3 Information Storage and Retrieval
H.3.3 Information Search and Retrieval: Information Filtering, Query Formulation
H.3.5 Online Information Services: Web-Based Services
H.5 Information Interfaces and Presentation
H.5.4 Hypertext/Hypermedia: Architecture, Navigation

I. Computing Methodologies
I.2 Artificial Intelligence
I.2.7 Natural Language Processing: Text Analysis
I.2.8 Problem Solving, Control Methods & Search: Dynamic Programming, Graph & Tree Search Strategies


Motivation of the Project


Project motivation:
• Provide a user-friendly solution to the problem mentioned in the introduction
• Save man-hours overall (a picture is worth a thousand words)
• Visualization and interactivity enhance interest and make learning fun
• Knowledge articles are assimilated easily and quickly
• An overview of a topic is obtained with minimum reading, minimizing time spent reading

Personal motivation:
• Learn new technologies
• Learn the SDLC and project management skills
• Recognition and rewards


Algorithm Used

1. Input to the system = URL of a Wikipedia article.

2. Select the document from the Wikipedia dump/scrape corresponding to the input URL (document = natural-language words + keywords + links).

3. Eliminate the natural-language words.

4. Count section-wise occurrences of keywords, store them in tables, and calculate weights. Example: weight of a particular keyword in a document = 0.7*cs1 + 0.5*cs2 + 0.3*cs3.

5. Create a table for the links in that document; if there is a link for a particular keyword, it adds to the weight of that keyword.

6. Create a threshold, based on weight, for the keywords or links to be displayed.
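The weighting in the steps above can be sketched in Python. The slides fix only the section coefficients (0.7, 0.5, 0.3); the link bonus, threshold, and sample counts below are illustrative assumptions.

```python
# Section-weighted keyword scoring: weight = 0.7*cs1 + 0.5*cs2 + 0.3*cs3,
# plus extra weight when the keyword is also a link. LINK_BONUS and
# THRESHOLD are illustrative assumptions; only the coefficients come
# from the slides.
SECTION_COEFFS = (0.7, 0.5, 0.3)
LINK_BONUS = 1.0
THRESHOLD = 2.0

def keyword_weight(section_counts, is_link):
    """section_counts = (cs1, cs2, cs3): occurrences in each document section."""
    weight = sum(c * n for c, n in zip(SECTION_COEFFS, section_counts))
    if is_link:
        weight += LINK_BONUS       # a linked keyword gains weight
    return weight

counts = {"graph": (3, 1, 0), "node": (1, 0, 2), "tree": (0, 1, 1)}
links = {"graph", "node"}          # keywords that also appear as links
weights = {kw: keyword_weight(cs, kw in links) for kw, cs in counts.items()}
selected = {kw: w for kw, w in weights.items() if w >= THRESHOLD}
```

Only "graph" and "tree" differ here by link status and counts; everything below the threshold is dropped from the display.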


Algorithm Used (cont’d)

7. Depending on the current depth, use a pre-decided window size to select the top keywords/links for the next level. Example: 20 at level 0, 10 at level 1, 5 at level 2 (tuning required).

8. For efficient searching of accurate data, work across the depth: if a keyword present in a previous level's document occurs many times at the next level (say 100), it adds weight to the corresponding keyword in the previous level's table.

9. The output is a graphical representation of keywords. If a node (keyword) is a link, it is connected to a node (keyword) of the next level; otherwise it stops at that level.
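The depth-dependent window of step 7 is a small lookup plus a top-k selection; treating levels deeper than 2 as window size 5 is an assumption (the system stops at 3 levels anyway).

```python
# Window size per depth (step 7): 20 keywords/links at level 0, 10 at
# level 1, 5 at level 2. Falling back to 5 for deeper levels is an
# assumption here, since the deck caps the depth at 3 levels.
WINDOW_SIZES = {0: 20, 1: 10, 2: 5}

def window_size(level):
    return WINDOW_SIZES.get(level, 5)

def top_for_next_level(weights, level):
    """Pick the top-k keywords/links by weight for expansion at this level."""
    k = window_size(level)
    return sorted(weights, key=weights.get, reverse=True)[:k]
```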


System flow diagram


System Architecture


Mathematical Model

Let S be the system.
S = {Uinp, U, D, Q, Wt, Kw, TKw,S, TKw,Wg, TU,Kw, TU,Kw,Wg}

Uinp = URL identifier (input to the system)

D = database of the WWW, containing webpages as documents di.
D = {d1, d2, d3, ..., dn} where di is a WWW document (webpage)

Q = set of all possible queries.
Q = {q1, q2, q3, ..., qn} where qi is any given query to be fired on the database

Wt = set of words of a particular document.
Wt = {w1, w2, ..., wn} where wi ∈ di, for 1 <= i <= n

Kw = set of keywords, Kw ⊆ Wt, obtained after Fel.
Kw = {k1, k2, ..., km} where ki ∈ Wt, for 1 <= i <= m

U = extracted URLs from document di.
U = {u1, u2, ..., un} where ui ∈ di


Mathematical Model

TKw,S = table of keywords and sectional counts, obtained after Fcnt

TKw,S = {<k1, sA1, sB1, sC1>, <k2, sA2, sB2, sC2>, ..., <km, sAm, sBm, sCm>}

TKw,Wg = table of keywords and associated weights, obtained after Fw
TKw,Wg = {<k1, wg1>, <k2, wg2>, ..., <km, wgm>}

TU,Kw = table of URLs in U mapped with the keywords-and-weights table TKw,Wg, obtained after Fmap
TU,Kw = {<un, km, wgm>}

Ut is a mapping of keywords and their respective <U>


Mathematical Model

Fel(Wt{<w1, w2, ..., wn>}) = Kw

Fel eliminates all natural-language elements from <Wt>; the resulting words are the keywords identified in the <Kw> set.

Fcnt(Kw{<k1, k2, ..., km>}) = TKw,S

Fcnt returns an array of tuples of keywords and their respective sectional counts {<km, s1, s2, s3>}, which is used in the calculation of keyword weights, and provides TKw,S as the input of Fw.

Fw(TKw,S{<km, sAm, sBm, sCm>}) = TKw,Wg

Fw takes the TKw,S produced by Fcnt as input, calculates the weight associated with each keyword, and returns an array of tuples of keywords and weights {<km, wgm>}.

Fmap(U{<u1, u2, ..., un>}, TKw,Wg{<k1, wg1>, <k2, wg2>, ..., <km, wgm>}) = TU,Kw,Wg

Fmap takes U{<u1, u2, ..., un>} and TKw,Wg{<km, wgm>} as input, maps the keywords with their respective URLs in di, and returns an array of URLs with their mapped keywords and weights.

Fwin(lvl) = {<5> ∨ <10> ∨ <20>}

Fwin is a window function that returns the size of the window, which depends on the current depth/level.
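The function chain Fel → Fcnt → Fw → Fmap can be sketched in Python. The tiny stopword set stands in for full natural-language elimination (NLTK in the real system), the section split is simplified to three word lists, and the 0.7/0.5/0.3 coefficients come from the weighting formula in the algorithm slides.

```python
# Minimal sketch of the model's function chain Fel -> Fcnt -> Fw -> Fmap.
# STOPWORDS is a stand-in for NLP-based elimination; the coefficients
# reuse the sectional weighting from the algorithm slides.
STOPWORDS = {"the", "a", "of", "is", "in"}
COEFFS = (0.7, 0.5, 0.3)

def f_el(words):                       # Fel: Wt -> Kw
    return [w for w in words if w not in STOPWORDS]

def f_cnt(keywords, sections):         # Fcnt: Kw -> T(Kw,S)
    return {k: tuple(sec.count(k) for sec in sections) for k in set(keywords)}

def f_w(t_kw_s):                       # Fw: T(Kw,S) -> T(Kw,Wg)
    return {k: sum(c * n for c, n in zip(COEFFS, s)) for k, s in t_kw_s.items()}

def f_map(urls, t_kw_wg):              # Fmap: (U, T(Kw,Wg)) -> T(U,Kw,Wg)
    return [(u, k, wg) for k, wg in t_kw_wg.items() for u in urls.get(k, [])]

sections = [["the", "graph", "node"], ["graph"], ["node"]]  # di in 3 sections
keywords = f_el([w for sec in sections for w in sec])
t_kw_s = f_cnt(keywords, sections)
t_kw_wg = f_w(t_kw_s)
t_u_kw = f_map({"graph": ["/wiki/Graph"]}, t_kw_wg)
```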


Feasibility Analysis

NP-Hard:
• The number of keywords and links is not known while scanning the wiki
• The processing power needed at the server is not determined in advance
• The ranking algorithm is exponential in nature
• A solution is not determined in polynomial time

NP-Complete:
• Assign ranks to keywords and links using the ranking algorithm
• Use a threshold value to limit the links
• Approximate processing power is calculated in advance to scan the documents

Thus the problem is converted from NP-Hard to NP-Complete.


Main Modules

Extraction Module:
• Extracts URLs and keywords from the base and sub-documents
• Uses frequency and inverse-frequency calculations

Keyword Analysis Module:
• Implements the ranking algorithm on keywords
• Keywords are identified and listed in the relevant tables

Linking Module:
• Checks whether keywords are linked to URLs
• If linking is not done, there will be duplicate entries of both URLs and keywords

Graphical Module:
• Displays the entire wiki page as a dynamic connected graph
• Provides the dynamic capability of changing the root node
• Shows the text snippet contained inside each node
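The Extraction Module's "frequency and inverse frequency" calculation is essentially TF-IDF over the base document and its sub-documents; the exact scoring below is an assumption, since the slides do not fix a formula.

```python
import math

# TF-IDF sketch for the Extraction Module: a keyword scores high when it
# is frequent in one document but rare across the document set. The
# normalization choices here are assumptions.
def tf_idf(term, doc, docs):
    tf = doc.count(term) / len(doc)            # term frequency in this doc
    df = sum(1 for d in docs if term in d)     # document frequency
    return tf * math.log(len(docs) / df) if df else 0.0

docs = [["graph", "node", "edge"], ["graph", "tree"], ["tree", "forest"]]
rare = tf_idf("node", docs[0], docs)     # "node" appears in 1 of 3 docs
common = tf_idf("graph", docs[0], docs)  # "graph" appears in 2 of 3 docs
```

The rarer term outranks the more common one at equal in-document frequency, which is what lets the module separate topical keywords from generic ones.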


Technologies Used

• Python – scripting language

• MathML – Mathematical Markup Language

• Gremlin or Neo4j for graph operations and storage

• Ajax – client-side scripting

• Git – Distributed Revision Control System

• Python’s NLTK libraries for Natural Language Processing

• Unix Shell
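Before committing to Gremlin or Neo4j, the node/edge model behind the Graphical Module can be prototyped with an in-memory adjacency map. A minimal sketch: the class, its fields, and the sample relations are all illustrative assumptions, not the production schema.

```python
# In-memory stand-in for the graph store (Gremlin/Neo4j in the real system).
# Nodes are keywords carrying a text snippet; edges carry the semantic
# relation shown on the connecting line.
class KnowledgeGraph:
    def __init__(self):
        self.snippets = {}           # keyword -> text snippet inside the node
        self.edges = {}              # keyword -> {neighbor: relation}

    def add_node(self, keyword, snippet=""):
        self.snippets.setdefault(keyword, snippet)
        self.edges.setdefault(keyword, {})

    def add_edge(self, a, b, relation):
        self.add_node(a)
        self.add_node(b)
        self.edges[a][b] = relation
        self.edges[b][a] = relation  # undirected, as drawn in the UI

    def neighbors(self, root):
        """Re-rooting the view is just a lookup: any node can become the root."""
        return self.edges.get(root, {})

g = KnowledgeGraph()
g.add_node("graph", "A structure of nodes and edges.")
g.add_edge("graph", "node", "consists of")
g.add_edge("graph", "edge", "consists of")
```

Because `neighbors` works from any keyword, changing the root node of the display needs no recomputation, matching the Graphical Module's "dynamic capability of changing root node".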


Proposed UI – shows the output of a search


Restrictions, Limitations & Constraints

We limit the software to searching for keywords and links to a maximum depth of 3 levels, including the root level.

There is a limit on the number of links or keywords chosen for further processing.

The dump has to be upgraded for every new release.

In the case of articles about natural-language words, the NLP step will itself eliminate those words.

In the case of small articles, relevant keywords may not be found properly.

A page may not contain a definite number of links.


References

Schonhofen, P., "Identifying Document Topics Using the Wikipedia Category Network," Web Intelligence (WI 2006), IEEE/WIC/ACM International Conference on, pp. 456-462, 18-22 Dec. 2006.

Lamberti, F.; Sanna, A.; Demartini, C., "A Relation-Based Page Rank Algorithm for Semantic Web Search Engines," IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 1, pp. 123-136, Jan. 2009.

Alani, H.; Sanghee Kim; Millard, D.E.; Weal, M.J.; Hall, W.; Lewis, P.H.; Shadbolt, N.R., "Automatic ontology-based knowledge extraction from Web documents," IEEE Intelligent Systems, vol. 18, no. 1, pp. 14-21, Jan.-Feb. 2003.

Schindler, M.; Vrandecic, D., "Introducing New Features to Wikipedia: Case Studies for Web Science," IEEE Intelligent Systems, vol. 26, no. 1, pp. 56-61, Jan.-Feb. 2011.

Cheong-Iao Pang; Biuk-Aghai, R.P., "Map-like Wikipedia overview visualization," Collaboration Technologies and Systems (CTS), 2011 International Conference on, pp. 53-60, 23-27 May 2011.

Boukhelifa, N.; Chevalier, F.; Fekete, J., "Real-time aggregation of Wikipedia data for visual analytics," Visual Analytics Science and Technology (VAST), 2010 IEEE Symposium on, pp. 147-154, 25-26 Oct. 2010.

Prato, A.; Ronchetti, M., "Using Wikipedia as a Reference for Extracting Semantic Information from a Text," Advances in Semantic Processing (SEMAPRO '09), Third International Conference on, pp. 56-61, 11-16 Oct. 2009.


References

Taneja, Harmunish; Gupta, Richa, "Web Information Retrieval Using Query Independent Page Rank Algorithm," Advances in Computer Engineering (ACE), 2010 International Conference on, pp. 178-182, 20-21 June 2010.

Pirrone, R.; Pipitone, A.; Russo, G., "Semantic sense extraction from Wikipedia pages," Human System Interactions (HSI), 2010 3rd Conference on, pp. 543-547, 13-15 May 2010.

Wikipedia page-view data: http://stats.wikimedia.org/EN/TablesPageViewsMonthlyCombined.htm

Wikipedia database download in XML format: http://dumps.wikimedia.org/ (via http://en.wikipedia.org/wiki/wikipedia:Database_Download)

MediaWiki wiki tools: http://en.wikipedia.org/wiki/MediaWiki

Wikipedia categorization: http://en.wikipedia.org/wiki/Wikipedia:Categorization


Paper Publications

Paper Title:

Visual Representation of Knowledge Articles as Dynamic Interactive Connected Graph Nodes.

Conferences where the paper was submitted:
European Modeling Symposium 2011 (EMS2011)
Informatics and Computational Intelligence 2011 (ICI2011)
Education and e-Learning Conference 2011 (EeL2011)

Conferences where the paper was accepted:
European Modeling Symposium 2011 (EMS2011)
Informatics and Computational Intelligence 2011 (ICI2011)
Education and e-Learning Conference 2011 (EeL2011)

Journal where the paper was accepted:
International Foundation for Modern Education and Scientific Research (INFOMESR)


Backward References:

1. Schonhofen, P., "Identifying Document Topics Using the Wikipedia Category Network," Web Intelligence (WI 2006), IEEE/WIC/ACM International Conference on, pp. 456-462, 18-22 Dec. 2006.

2. Cheong-Iao Pang; Biuk-Aghai, R.P., "Map-like Wikipedia overview visualization," Collaboration Technologies and Systems (CTS), 2011 International Conference on, pp. 53-60, 23-27 May 2011.

3. Boukhelifa, N.; Chevalier, F.; Fekete, J., "Real-time aggregation of Wikipedia data for visual analytics," Visual Analytics Science and Technology (VAST), 2010 IEEE Symposium on, pp. 147-154, 25-26 Oct. 2010.

4. Lamberti, F.; Sanna, A.; Demartini, C., "A Relation-Based Page Rank Algorithm for Semantic Web Search Engines," IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 1, pp. 123-136, Jan. 2009.

5. Prato, A.; Ronchetti, M., "Using Wikipedia as a Reference for Extracting Semantic Information from a Text," Advances in Semantic Processing (SEMAPRO '09), Third International Conference on, pp. 56-61, 11-16 Oct. 2009.



Forward References:

• Cheong-Iao Pang; Biuk-Aghai, R.P., "Map-like Wikipedia overview visualization," Collaboration Technologies and Systems (CTS), 2011 International Conference on, pp. 53-60, 23-27 May 2011.

• Pirrone, R.; Pipitone, A.; Russo, G., "Semantic sense extraction from Wikipedia pages," Human System Interactions (HSI), 2010 3rd Conference on, pp. 543-547, 13-15 May 2010.

• Wikipedia page-view data: http://stats.wikimedia.org/EN/TablesPageViewsMonthlyCombined.htm

• Wikipedia database download in XML format: http://dumps.wikimedia.org/ (via http://en.wikipedia.org/wiki/wikipedia:Database_Download)

• MediaWiki wiki tools: http://en.wikipedia.org/wiki/MediaWiki

• Wikipedia categorization: http://en.wikipedia.org/wiki/Wikipedia:Categorization



Any Questions?


Thank You