
Developing a Collaborative Multimedia mLearning Environment*

Dr. Jinan Fiaidhi, Guo T. Song, Dr. Sabah Mohammed and Nathan Epp
Department of Computer Science, Lakehead University, Canada.

Abstract: The evidence is overwhelming that mobile learning is beginning to take hold. mLearning is a new learning paradigm that enables students to access relevant materials while working on a subject - anytime, anywhere - using their handheld devices. This article presents a framework for designing an mLearning environment based on the JXTA infrastructure and SVG-RDF multimedia learning objects. With capabilities such as searching, advertising, SVG transcoding and SVG annotation, peers can collaborate to achieve better learning using highly constrained devices.

Introduction
The need for computing in support of education continues to escalate. Until recently, everyone assumed that educational computing required desktop computers. Today, wireless-enabled laptops, PDAs, Tablet PCs and PocketPCs make it possible for students to use their time more efficiently, access databases and information from the Internet, and work collaboratively. Through this flexible learning approach, students can succeed in selectively incorporating critical input from their peers and instructor, then revising their documents based on their own interpretation of facts and theory. This technology will soon give students full-time access to computation and wireless connectivity, while expanding where educational computing can take place to the home and field. This is an important equity issue, because these devices will provide much of the educational benefit of more expensive computers in an inexpensive format that has many advantages over desktops. Connectivity for these devices will soon be the norm rather than the exception. As they become more functional and more connected, the possibility for completely new and unforeseen applications increases. The key to their ultimate utility is whether or not software develops that can leverage the unique physical characteristics of the devices in ways that support positive classroom interaction and collaboration.

The research attention to this issue is gaining great momentum, and it has been termed the new research area of Mobile Learning (mLearning), which utilizes such innovations as wireless communication, Personal Digital Assistants (PDAs), digital content from traditional textbooks and more, providing a dynamic learning environment for students and laying the framework for more exploration into the fusion of education and technology (Wood 2003). Mobile networks have been set up at dozens of colleges and universities in order to facilitate collaboration and group decision-making in learning tasks and inter-institutional projects. Typically, wireless classrooms start out as small, limited projects in one or two departments before being adopted by the rest of the university community. Institutions that have set up wireless infrastructures to promote media-rich, collaborative learning environments include Buena Vista University in Iowa, Carnegie Mellon University in Pennsylvania, Greenville College in Illinois, the College of Mount Saint Joseph in Ohio, the University of Florida, the School of Hotel Administration at Cornell University, Harvard University and Seton Hall University in New Jersey (Olsen 2000). In Canada there are many similar attempts, for example the Mobile Learning consortium (http://www.mcgrawhill.ca/college/mlearning/), which is comprised of post-secondary institutes, such as Seneca College and the Northern Alberta Institute of Technology (NAIT), and educational publishing and technology companies (members include McGraw-Hill Ryerson; Bell Mobility, a division of Bell Canada; Cap Gemini Ernst & Young; Blackboard Inc.; Hewlett-Packard; and Avaya). In fall 2002, approximately 300 first-year students at NAIT and Seneca College received wireless access to curriculum materials for their Introductory Accounting courses, based on Financial Accounting Principles.
Selected content for those courses was accessed on an HP iPaq handheld computer operating over the Bell Mobility network. Some of the interactive learning tools available to students included instant messaging and sharing of digital audio/video content.

With such facilities, learning in small groups is notably enhanced: students encourage each other to ask questions, explain and justify their opinions, articulate their reasoning, and elaborate and reflect upon their knowledge. Indeed, recognition of the educational value of student collaboration has traditionally led to the introduction of conventional groupware tools - such as chat, threaded discussions, and email - into distance-learning environments. While these tools can facilitate didactic interactions between learners, they cannot ensure productive learning dialogues between participants, and they do not address how to provide as rich a

* Article accepted by the 10th Western Canadian Conference on Computing Education (WCCCE 2005), May 5-6, 2005, University of Northern British Columbia (UNBC), Prince George, BC, Canada.


learning environment that utilizes multimedia for students working within ubiquitous and mobile environments. Today's collaborative learning and problem-solving environments afford the opportunity to bring together different learners to jointly tackle a problem. A student in one location can connect over the Web and interact with students in other locations. Current collaborative environments concentrate on providing communication between participants and tools to facilitate collaborative activities, such as shared whiteboards and shared applications. As the use of collaborative environments becomes more ubiquitous, we can expect many of the same problems facing colleagues physically meeting together to arise in cyberspace. The evolution of these environments has recently been accelerated by improved wireless/mobile telecommunications capabilities, open networks, continuous increases in computing power, improved battery technology, and the emergence of flexible peer-to-peer grid architectures.

However, conventional ubiquitous learning media are text-based and restricted in the information they can present. A new generation of technology has emerged to provide various types of information to meet the personal needs of users at any time. Currently, different types of multimedia can be processed by a variety of ubiquitous devices, making virtual and collaborative learning environments our new mLearning reality. Indeed, we cannot simply reuse the many packages traditionally used for Web-based distance learning (e.g. Blackboard, WebCT, WebFuse, CoSE, TopClass, WebEx, VNC, SCORM, and Tango), because they lack some intrinsic ubiquity capabilities, do not deal with reusable open-source learning materials, and rely only on the traditional Web-based infrastructure.

Developing Multimedia Learning Objects for the mLearning Environment:
The Learning Object (LO) model provides a framework for the exchange of learning objects between learning systems' platforms. If LOs are represented in an independent way, conforming instructional systems can deliver and manage them. The learning object initiatives, such as IEEE's LOM, Educom's IMS, or eduSource CanCore, are a subset of efforts to create learning technology standards for such interoperable instructional systems (Friesen 2004). The LO content to be described is normally built of multimedia elements (texts, images, audio, video, animations) which are stored in a modularized way. All of the methods used to specify LO metadata use metadata in the traditional sense of describing hypermedia for a Web-based environment. Several attempts have tried to present multimedia learning as scalable media that can work in some ubiquitous environments utilizing portable and pocket PCs, such as BSCW, the Notebook University project, the Courseware Watchdog, and the VEL, based on multimedia standards such as MPEG-4 and MPEG-21 for animated content or JPEG2000 for still images. However, such attempts fall short for mobile environments because of the limitations of the mobile devices. Indeed, these diverse mobile clients have differing requirements for communicating and presenting data. When they are attached to Web servers, the best approach for working with these clients is to provide an easy means of translating and tailoring data to meet specific client needs, a job that is easily handled by XML and transcoding technologies. By making legacy data available via XML-based LOs, learning systems can greatly extend their reach to diverse mobile clients. Transcoding technology can circumvent some of the complexities of content adaptation: it adapts content to match constraints and preferences associated with specific environments.
It can modify either the content or the rendering associated with an application. In other words, both computer users and cell phone users can view content in a way that suits their devices, without sacrificing the content itself. Thus, transcoding is vital to pervasive computing because it can bridge the gap between existing Web implementations and mobile environments.

There are two major standards currently available for representing multimedia in an XML form with underlying APIs for transcoding on mobile devices: the ebXML standard (http://www.ebxml.org) and the World Wide Web Consortium standards (W3C at http://www.w3c.org). The most common is the W3C approach, which is based on the SVG (Scalable Vector Graphics) standard for representing multimedia. There are three variants of SVG (full SVG, SVG Basic and SVG Tiny), of which SVG Basic and SVG Tiny are profiles designed for resource-limited mobile devices. Unlike other multimedia formats, SVG is a powerful tool for anybody managing multimedia content for the Web or other environments (Lee et al 2002). By leveraging the force of XML and the visual strengths of dynamic and easily accessible vector graphics, the Apache XML Project's Batik team extends this power by building a successful API that can be used for transcoding.

Although SVG and the transcoding APIs solve the problem of representing multimedia for mobile environments, there is still a need to represent LOs based on SVG. In this direction, the LO must be described using a schema, and the resulting LO must carry its representative metadata as well as its learning contents (Figure 1).


Figure 1: The structure of the Learning Object.

Metadata is just data about data, and RDF (Resource Description Framework) can be used to describe metadata. Within RDF there is a mechanism for including different ways of classifying things in the same document, using XML namespaces. This means that several different ways of classifying the world can be combined. The RDF Schema language (RDFS) allows you to create a schema for a namespace (e.g. http://xmlns.com/foaf/0.1/). Moreover, RDF references can be used to annotate the multimedia contents, adding more useful interactivity to the learning objects (Mohan et al 1999).
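As a concrete sketch of this packaging, an SVG learning object can carry its RDF description inside SVG's standard metadata element. The Dublin Core properties and the content shown below are illustrative only; a real LO would follow a profile such as CanCore:

```xml
<!-- A minimal sketch of a packaged SVG learning object: the RDF metadata
     travels inside SVG's <metadata> element, next to the learning content.
     The titles and property choices here are illustrative. -->
<svg xmlns="http://www.w3.org/2000/svg" width="300" height="200">
  <metadata>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <rdf:Description rdf:about="">
        <dc:title>Introductory Accounting: Ledger Diagram</dc:title>
        <dc:creator>Example Instructor</dc:creator>
        <dc:format>image/svg+xml</dc:format>
      </rdf:Description>
    </rdf:RDF>
  </metadata>
  <!-- The learning content itself: ordinary SVG drawing elements. -->
  <rect x="20" y="20" width="260" height="160" fill="none" stroke="black"/>
  <text x="40" y="60">Assets = Liabilities + Equity</text>
</svg>
```

Because the metadata lives inside the document, a peer that receives the SVG file receives the LO description with it, matching the structure of Figure 1.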

Preparing the Infrastructure for Collaborative Multimedia mLearning:
One of the major challenges for mLearning is to provide a mobile networking infrastructure that students/peers can use for their collaborative learning. This infrastructure needs to integrate heterogeneous systems into one collaboration system and to achieve the following goals (Singhal and Zyda 1999): (1) different kinds of application endpoints should be able to join/leave the same collaboration session; (2) different providers of multipoint multimedia and data collaboration should be connected together to build unified multimedia and data multipoint channels. The first goal in particular requires a common signalling control protocol and event bus, which specifies the message exchange procedure between different types of collaboration endpoints and session servers. As components are increasingly designed to be accessed over the Internet and its mobile devices, it becomes more and more important that component technologies are open. For this reason, XML-based messaging is emerging as an important open technology. In this direction, there are many systems that use XML as their medium of communication between peers, enabling text chat, instant messaging, and whiteboards, as well as sharing multimedia resources (e.g. Jabber, NaradaBrokering, JXTA, and JXTA4J2ME). Such systems are based on a distributed publish-subscribe model for coordinating collaboration and communication events between the distributed peers. However, only JXTA4J2ME provides a purely mobile collaborative environment (Siddiqui 2002). The purpose of JXTA4J2ME is to provide JXTA-compatible functionality on constrained devices using the Connected Limited Device Configuration (CLDC) and the Mobile Information Device Profile 2.0 (MIDP). Once you have downloaded JXTA4J2ME/JXTA and configured your peer node, you obtain a lease from the JXTA relay and are connected to the JXTA network.
You can now start sending different types of messages (for example, search messages for files on which to collaborate). Before considering the actual JXTA messaging, however, it is important to understand that the format JXTA4J2ME uses to communicate with the relay is specialized and can only convey messages between J2ME environments. Thus there is a need to let a J2ME mobile environment communicate with a laptop/PocketPC connected to the Web, and vice versa. For this purpose one can use the JXTA4JMS APIs (Khan 2005) to act as a bridge between the two environments. Figure 2 illustrates the scenarios of communication between the two environments.

Figure 2: Communication between JMS and J2ME Clients.

Each J2ME JXTA peer is provided with lightweight building blocks to communicate with the JXTA network via a JXTA relay. The JXTA relay is used to pass messages among peers and provides off-line storage. The relay stores all of the incoming and outgoing messages, so the resource-constrained JXTA peers on the J2ME


platform don't have to. The relay acts as a proxy to the JXTA network. It stores incoming messages for the mobile peer. The JXTA J2ME peer periodically polls the relay to get its incoming messages (See Figure 3).

Figure 3: The JXTA Relay.
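The store-and-poll contract described above can be sketched independently of the JXTA4J2ME API. The class and method names below are illustrative, not JXTA APIs; the point is simply that the relay queues messages for a constrained peer, which later drains them by polling:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch of the relay's store-and-poll pattern. The class
// names are hypothetical and do not come from JXTA4J2ME.
public class RelaySketch {

    // The relay holds one mailbox (message queue) per registered peer.
    static class Relay {
        private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<String>();

        // Called when a message arrives for the peer, even while it is offline.
        void store(String message) {
            mailbox.add(message);
        }

        // Called by the peer on each poll; returns the next stored
        // message, or null when nothing is waiting.
        String poll() {
            return mailbox.poll();
        }
    }

    public static void main(String[] args) {
        Relay relay = new Relay();
        relay.store("search-response: calculus-LO.svg");
        relay.store("chat: hello from peer B");

        // The mobile peer periodically polls the relay for queued messages.
        String m;
        while ((m = relay.poll()) != null) {
            System.out.println(m);
        }
    }
}
```

In the real system the mailbox lives on the relay host and the poll is an HTTP round trip, but the queueing behaviour is the same.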

Overall, the use of JXTA4JMS and a JMS client simplifies the development of message-based enterprise applications. This is particularly beneficial for multimedia messaging. Moreover, the JXTA network provides a variety of peer-to-peer capabilities: JXTA defines a set of protocols that allow any device on the network to communicate and collaborate, and it includes security features not addressed by JMS. JXTA technology provides broader functionality than JMS alone and is suitable for both enterprise and Internet-based peer-to-peer applications, as the JXTA core can provide peer connections through firewalls, NATs (Network Address Translators) and proxies.

Collaborative Multimedia Learning Objects Based on SVG:
One of the ultimate goals of P2P technology is to provide users with a clear and effective communication environment. To achieve this goal, fast, scalable, rich-content graphics are indispensable. However, traditional raster graphics formats such as GIF and JPEG can only meet some of these criteria: they are big, slow, and resolution-dependent. SVG is designed to improve this situation. With this graphics format, complex multimedia learning object contents and highly interactive multimedia animations all become possible for peer collaboration and other applications. The SVG format is based on XML technology. XML is the universal format for structured documents and data on the Web and is regarded as the key to the next-generation Web. SVG uses XML grammar to define scalable vector-based 2D graphics and can be used as an XML namespace on the Web; therefore it cooperates easily with other XML-based Web technologies. Being an XML namespace, an SVG document is natively an XML document, so it enjoys all the advantages that XML documents have. For example, it can easily be edited with plain-text editors and dynamically generated by server-side scripting languages such as Perl and PHP.
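As a minimal illustration of such dynamic generation, plain Java (no Batik required) can emit a well-formed SVG document as text, exactly as a server-side script would; the shapes and labels below are illustrative:

```java
// Minimal sketch: generating an SVG document dynamically from Java,
// using only standard-library classes.
public class SvgGenerator {

    // Builds a tiny SVG document containing one labelled circle.
    public static String circleSvg(int cx, int cy, int r, String label) {
        return "<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"200\" height=\"200\">\n"
             + "  <circle cx=\"" + cx + "\" cy=\"" + cy + "\" r=\"" + r
             + "\" fill=\"steelblue\"/>\n"
             + "  <text x=\"" + cx + "\" y=\"" + (cy + r + 16) + "\">" + label + "</text>\n"
             + "</svg>\n";
    }

    public static void main(String[] args) throws Exception {
        String svg = circleSvg(100, 90, 40, "a learning object");
        // Because SVG is XML, any XML parser can confirm the output is well formed.
        javax.xml.parsers.DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new java.io.ByteArrayInputStream(svg.getBytes("UTF-8")));
        System.out.print(svg);
    }
}
```

The same string could just as easily be produced by a Perl or PHP script; the receiving peer only sees well-formed SVG.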

SVG uses specific tags to define basic vector graphics objects such as rect, circle, ellipse, polyline, polygon and so on. Complex graphics that cannot be described by the basic shapes are defined as paths. An SVG path consists of lines and curves, with arcs and Bezier curves being the major means of defining curves. SVG uses a "painter's model" for rendering: all paint operations are processed successively. The first element in the SVG document is painted first, and subsequent elements are painted on top of previously painted elements. Graphics elements are blended into the elements already rendered on the canvas using simple alpha compositing. With the Batik API (http://xml.apache.org/batik) the process of rendering SVG becomes much simpler.

SVG Rendering under Batik:
Batik provides a JSVGCanvas, a Swing component that can be used to display static or dynamic SVG documents. With the JSVGCanvas, we can easily display SVG documents (e.g. from a URI or a DOM tree) and manipulate them (rotating, zooming, panning, selecting text, or activating hyperlinks). Listing 1 illustrates the use of JSVGCanvas within a Java environment.

import java.awt.*;
import java.awt.event.*;
import java.io.*;
import javax.swing.*;
import org.apache.batik.swing.JSVGCanvas;
import org.apache.batik.swing.gvt.GVTTreeRendererAdapter;
import org.apache.batik.swing.gvt.GVTTreeRendererEvent;
import org.apache.batik.swing.svg.SVGDocumentLoaderAdapter;
import org.apache.batik.swing.svg.SVGDocumentLoaderEvent;
import org.apache.batik.swing.svg.GVTTreeBuilderAdapter;
import org.apache.batik.swing.svg.GVTTreeBuilderEvent;

public class SVGApplication {

    public static void main(String[] args) {
        JFrame f = new JFrame("Batik");
        SVGApplication app = new SVGApplication(f);
        f.getContentPane().add(app.createComponents());
        f.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                System.exit(0);
            }
        });
        f.setSize(400, 400);
        f.setVisible(true);
    }

    JFrame frame;
    JButton button = new JButton("Load...");
    JLabel label = new JLabel();
    JSVGCanvas svgCanvas = new JSVGCanvas();

    public SVGApplication(JFrame f) {
        frame = f;
    }

    public JComponent createComponents() {
        final JPanel panel = new JPanel(new BorderLayout());
        JPanel p = new JPanel(new FlowLayout(FlowLayout.LEFT));
        p.add(button);
        p.add(label);
        panel.add("North", p);
        panel.add("Center", svgCanvas);

        // Set the button action.
        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent ae) {
                JFileChooser fc = new JFileChooser(".");
                int choice = fc.showOpenDialog(panel);
                if (choice == JFileChooser.APPROVE_OPTION) {
                    File f = fc.getSelectedFile();
                    try {
                        svgCanvas.setURI(f.toURL().toString());
                    } catch (IOException ex) {
                        ex.printStackTrace();
                    }
                }
            }
        });

        // Set the JSVGCanvas listeners.
        svgCanvas.addSVGDocumentLoaderListener(new SVGDocumentLoaderAdapter() {
            public void documentLoadingStarted(SVGDocumentLoaderEvent e) {
                label.setText("Document Loading...");
            }
            public void documentLoadingCompleted(SVGDocumentLoaderEvent e) {
                label.setText("Document Loaded.");
            }
        });
        svgCanvas.addGVTTreeBuilderListener(new GVTTreeBuilderAdapter() {
            public void gvtBuildStarted(GVTTreeBuilderEvent e) {
                label.setText("Build Started...");
            }
            public void gvtBuildCompleted(GVTTreeBuilderEvent e) {
                label.setText("Build Done.");
                frame.pack();
            }
        });
        svgCanvas.addGVTTreeRendererListener(new GVTTreeRendererAdapter() {
            public void gvtRenderingPrepare(GVTTreeRendererEvent e) {
                label.setText("Rendering Started...");
            }
            public void gvtRenderingCompleted(GVTTreeRendererEvent e) {
                label.setText("");
            }
        });
        return panel;
    }
}

Listing 1: A Java program for rendering SVG using the Batik JSVGCanvas.

Besides rendering, SVG supports the ability to change vector graphics over time, which is an intrinsic feature for collaboration. SVG provides two major animation means: using SVG's animation elements or using the SVG DOM. SVG's animation elements were developed in collaboration with the W3C's SMIL working group. The animation elements support time-based modification of the SVG document's elements and share the same timing and animation mechanisms with SMIL. An animation defines a mapping of time to values for the target attributes; this mapping accounts for all aspects of timing, as well as animation-specific semantics. Various SVG properties can be animated, such as color values, positions, filter parameters, etc. The Document Object Model (DOM) is a platform- and language-neutral interface that allows programs and scripts to dynamically access and update the content, structure and style of documents. SVG offers a set of additional DOM interfaces (ECMAScript and Java bindings) to support efficient animation via scripting. All SVG elements and their attributes can be accessed through the DOM, so it is less limited and more powerful than the animation elements. Batik provides an implementation of the SVG DOM; as of the Batik 1.5.1 API, most of the standard DOM features are implemented, as defined by the org.apache.batik.dom.svg package. Moreover, SVG graphics can also be highly interactive. SVG supports many UI (user interface) events and pointing events and provides a quick and effective mechanism to process them. Moving or clicking the mouse over any graphics element can generate immediate feedback, such as highlighting, text tips, and real-time changes to the surrounding SVG. Animations and script executions can also be triggered by this mechanism. With the SVG DOM you have complete access to all elements, attributes and properties, and a rich set of event handlers such as onmouseover and onclick can be assigned to any SVG graphical object.
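For instance, a declarative SMIL-style animation and a click handler can be attached directly to a shape; the handler name highlight() below is a hypothetical script function, not part of SVG:

```xml
<!-- A circle that pulses via SVG's declarative animation element and
     also reacts to clicks; highlight() is an illustrative script hook. -->
<circle cx="60" cy="60" r="20" fill="orange" onclick="highlight(evt)">
  <animate attributeName="r" values="20;35;20" dur="2s"
           repeatCount="indefinite"/>
</circle>
```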
With this interactive feature, SVG is an ideal environment for multimedia collaboration. For such interactivity and collaboration, DOM listeners can be used and programmed to respond to the peers' commands. DOM listeners registered on the SVG document are invoked from the canvas update thread. To avoid race conditions, programmers of events and listeners must not manipulate the DOM tree from another thread. The way to switch from an external thread to the canvas update thread is to use the following code:

// Returns immediately
canvas.getUpdateManager().getUpdateRunnableQueue().
    invokeLater(new Runnable() {
        public void run() {
            // Insert some actions on the DOM here
        }
    });

...

// Waits until the Runnable is invoked
canvas.getUpdateManager().getUpdateRunnableQueue().
    invokeAndWait(new Runnable() {
        public void run() {
            // Insert some actions on the DOM here
        }
    });

Furthermore, SVG content can be transcoded from one form of input to another form of output. With the Batik API, the transcoding process is simple and is based on the org.apache.batik.transcoder package, which defines five major classes:

Transcoder - Defines the interface that all transcoders implement.
TranscoderInput - Defines the input of a transcoder.
TranscoderOutput - Defines the output of a transcoder.
TranscodingHints - Contains different hints that can be used to control the various options or parameters of a transcoder.
ErrorHandler - Provides a way to get the errors and/or warnings that might occur while transcoding.
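Assuming the Batik jars are on the classpath, a typical use of these classes to rasterize an SVG LO to PNG looks roughly like the following sketch; the output file name is illustrative:

```java
import java.io.FileOutputStream;
import java.io.StringReader;

import org.apache.batik.transcoder.TranscoderInput;
import org.apache.batik.transcoder.TranscoderOutput;
import org.apache.batik.transcoder.image.PNGTranscoder;

// Sketch: rasterizing SVG content to PNG with Batik's transcoder classes.
// Requires the Batik jars; "lo.png" is an illustrative file name.
public class SvgToPng {
    public static void main(String[] args) throws Exception {
        String svg = "<svg xmlns='http://www.w3.org/2000/svg' "
                   + "width='100' height='100'>"
                   + "<circle cx='50' cy='50' r='40' fill='red'/></svg>";

        PNGTranscoder transcoder = new PNGTranscoder();
        // Transcoding hints control output parameters such as raster width.
        transcoder.addTranscodingHint(PNGTranscoder.KEY_WIDTH, new Float(100));

        TranscoderInput input = new TranscoderInput(new StringReader(svg));
        FileOutputStream out = new FileOutputStream("lo.png");
        try {
            transcoder.transcode(input, new TranscoderOutput(out));
        } finally {
            out.close();
        }
    }
}
```

The same pattern, with a different Transcoder subclass or hints, lets a relay tailor one SVG LO to the screen sizes of several constrained devices.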

However, describing the multimedia contents with SVG is not enough to describe learning objects: there is also a need to describe the learning object metadata. RDF is the W3C standard for describing metadata (www.w3.org/RDF/). RDF (Resource Description Framework), as its name implies, is a framework for describing and interchanging metadata. It is built on the following rules:

1. A Resource is anything that can have a URI; this includes all the world's Web pages, as well as individual elements of an XML document.

2. A PropertyType is a Resource that has a name and can be used as a property, for example Author or Title.

3. A Property is the combination of a Resource, a PropertyType, and a value.
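These three rules combine into simple RDF/XML statements. For example, the following fragment (the resource URI is illustrative) expresses one Property: a Resource, the PropertyType dc:title, and a literal value:

```xml
<!-- One RDF Property: the Resource (an SVG learning object at an
     illustrative URI) has the PropertyType dc:title with a value. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/los/ledger.svg">
    <dc:title>Introductory Accounting: Ledger Diagram</dc:title>
  </rdf:Description>
</rdf:RDF>
```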

Although RDF is a useful format for cataloguing a learning object according to some standard (e.g. CanCore), it falls short in describing multimedia contents. For this purpose many researchers use the RDF Graph Modeling Language (RGML) (www.cs.rpi.edu/~puninj/RGML/) to describe multimedia components as a graph structure, including semantic information associated with the graph. RGML uses RDF to define graph, node, and edge as RDF classes, and attributes of graphs (such as label and weight) as RDF properties. Some of these RDF properties establish relationships between graph, node, and edge instances. RDF statements about graph elements involve subjects, predicates and objects: subjects and predicates are RDF Resources, while objects are either RDF Resources or RDF Literals. RGML uses the XML Schema datatypes for RDF Literals. RGML can easily be combined with other RDF vocabularies, for example to add CanCore properties. RGML is very useful for defining the structure and relationships of the components within multimedia content.
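A rough sketch of an RGML description of a two-component multimedia LO might look as follows. The element and property names are reproduced from the RGML vocabulary as best understood here and should be checked against the schema itself; the URIs and labels are hypothetical:

```xml
<!-- Illustrative RGML sketch: a directed graph linking two multimedia
     components of a lesson. Names follow the RGML vocabulary as
     understood here; URIs and labels are hypothetical. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rgml="http://purl.org/puninj/2001/05/rgml-schema#">
  <rgml:Graph rdf:about="http://example.org/los/lesson-graph">
    <rgml:nodes>
      <rgml:Node rdf:about="#intro" rgml:label="Intro animation"/>
      <rgml:Node rdf:about="#quiz" rgml:label="Self-test image"/>
    </rgml:nodes>
    <rgml:edges>
      <rgml:Edge rdf:about="#next">
        <rgml:source rdf:resource="#intro"/>
        <rgml:target rdf:resource="#quiz"/>
      </rgml:Edge>
    </rgml:edges>
  </rgml:Graph>
</rdf:RDF>
```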

Conclusion:
In this article, we introduced a framework that enables mobile peers and other Web-based peers to collaborate using multimedia learning objects based on SVG and RDF. The proposed framework requires a special peer-to-peer communication infrastructure, which JXTA4J2ME provides with the support of the Java Messaging (JMS) API and the SVG Batik API. The JMS APIs enable J2ME mobile peers to communicate with peers on other JXTA Web nodes and vice versa. The SVG Batik APIs enable multimedia rendering and transcoding on various devices. Figure 4 illustrates an implemented prototype of our framework.


Figure 4: Lakehead University (LU) mLearning JXTA Prototype.

The current implemented prototype of this framework includes capabilities such as packaging an LO (i.e. associating RDF metadata with an SVG multimedia/image), searching/querying the JXTA network for relevant SVG-based LOs, and rendering and transcoding SVG LOs. Moreover, the prototype includes a textual chatting capability. With such a framework, learning peers are able to share an SVG repository. We are currently modifying the prototype to add an annotation capability, which can make collaboration and interaction with the SVG LO more engaging. The following scenario illustrates a future vision of using the modified mLearning prototype:

Step 1: The LO sender advertises his/her LO for sharing and collaboration.
Step 2: The receiver of the LO uses the RDF/RGML annotator/editor to produce an annotated LO conforming to his/her ontology and the sender's RDF markup metadata.
Step 3: The annotator publishes the client ontology (if not already published) and the mapping rules derived from the annotations.
Step 4: Other peers can use the JXTA querying facility to search for relevant LOs for collaboration.

This research is part of an ongoing research project at Lakehead University to develop a model mLearning environment.

References:
[1] N. Friesen (2004). Three Objections to Learning Objects. In McGreal, R. (Ed.), Online Education Using Learning Objects. London: Routledge/Falmer.
[2] F. Khan (2005). Implement JXTA-for-JMS. IBM developerWorks, 22 Feb 2005. http://www-128.ibm.com/developerworks/library/wi-jxta2/?ca=drs-tp0805
[3] S. Lee et al. (2002). Ubiquitous Access for Collaborative Information System using SVG. SVG Open Conference, Zurich, Switzerland, July 15-17, 2002.
[4] R. Mohan, J.R. Smith, and C.-S. Li (1999). Adapting Multimedia Internet Content for Universal Access. IEEE Transactions on Multimedia, Vol. 1, No. 1 (03/1999).
[5] F. Olsen (2000). The Wireless Revolution. The Chronicle of Higher Education (Information Technology Section), October, 47(6).
[6] B. Siddiqui (2002). JXTA4J2ME Implementation Architecture. Developer.com. www.developer.com/java/j2me/article.php/1501461
[7] S. Singhal and M. Zyda (1999). Networked Virtual Environments: Design and Implementation. Addison Wesley.
[8] K. Wood (2003). An Introduction to Mobile Learning (mLearning). Ferl: Technology for eLearning, March 2003. http://ferl.becta.org.uk/display.cfm?page=65&catid=192&resid=5194&printable=1