
Augmented Reality System based on Sketches for Geometry Education

Simona Maria Banu
Transilvania University, MIV Imaging Venture Laboratory
Dept. of Automatics, Electronics and Computers, Braşov, România
[email protected]

Abstract— This paper presents the AG3DO (Augmented Geometry 3D Objects) prototype, a tool for generating three-dimensional geometrical objects starting from two-dimensional freehand line drawings. The system is used to assist students in understanding descriptive geometry. By means of Augmented Reality (AR) technology, 3D geometric models reconstructed from sketches (freehand drawings) can be visualized. The input sketches represent two contour projections of the final three-dimensional object. Our tests indicate that the prototype improves the spatial ability of the user.

I. INTRODUCTION AND RELATED WORK

The costs of IT (Information Technology) equipment are decreasing, so the number of people accessing, querying and interacting with it is increasing. Currently, one technology, Augmented Reality (AR), stands out from the crowd by bringing the visions of science fiction literature and cinema into reality. Little equipment is needed to develop an AR application, for both desktop PCs (with a webcam) and mobile devices (cell phones, tablets and notebooks). Moreover, AR has great potential in the educational field, due to the following important characteristics:
• AR provides rich contextual learning for individuals learning a new skill;
• it has the power to engage a learner in ways that have never been possible before;
• it can provide each student with his/her own unique discovery path;
• there is no real consequence if mistakes are made during skills training.

There are current and increasing efforts in using Augmented Reality as an educational tool [1], [2], [3], [4]. We observed that most of these applications tend to be mere virtual versions of real materials, with little attention to how the educational potential of the tool is exploited for learning or how these materials will be integrated into the educational environment. This makes it challenging to produce instructional materials that enable teachers to explore the potential of the available AR tools.

This article introduces a new prototype, named AG3DO (Augmented Geometry 3D Objects), used for learning descriptive geometry in secondary school.

Freehand sketching is considered a universal means of visual communication. In ancient times people told stories through hieroglyphs, and nowadays every meeting room has a whiteboard used for annotations. Sketching is a natural way to communicate ideas quickly by conveying visual information, and it also encourages creativity.

Three-dimensional modeling is mainly used in design disciplines to help professional users visualize and understand prototypes of future real objects. Freehand sketching has largely been replaced by computerized sketching, owing to its ability to author complex objects. The first computerized sketching application was Sketchpad [5]. Another pioneering sketching system, SKETCH, was introduced by Zeleznik et al. [6]; it introduced a new form of interaction for manipulating 3D objects.

3D modeling software packages such as AutoCAD [7] and SolidWorks [8] are flexible tools for rapid modeling tasks, although they are complex and require long training sessions. At the same time, RasterVect [9] converts scanned 2D paper drawings into vectorized data ready to be interpreted by CAD software.

A system that combines two major areas of research, the creation of 3D models from a 2D image and the use of AR as a means of visualizing 3D data, is presented in [10]. The authors designed a system that automatically creates 3D virtual building models from 2D architectural blueprints. After the extrusion process, Augmented Reality is used to overlay the virtual 3D structure on the real 2D diagram.

Descriptive geometry textbooks always include abstract diagrams of geometrical objects to explain the material to students. To better understand geometry problems, teachers and students have to sketch the 3D objects on the whiteboard and on paper, respectively. Augmented Reality (AR) combined with Computer Vision techniques enables three-dimensional visualization of geometrical models as if they were real, solid objects.

Our work builds on the approach described in [11]. The authors present a framework for authoring three-dimensional virtual scenes containing mechanical systems for AR, based on hand sketching. The 3D models were reconstructed from sketches representing isometric projections of the 3D model, and interaction was allowed by modifying sketch elements to update the virtual scene.

In this work, we describe a new tool for generating three-dimensional geometric surfaces from two-dimensional line drawings, named AG3DO (see Figure 1). The input is represented by two orthogonal perspectives of a 3D geometrical object. These 2D projections are obtained from freehand sketches of the 3D object that the user has previously drawn on the computer. After reconstruction, the virtual three-dimensional object is superimposed onto the real scene through Augmented Reality. Since Head Mounted Displays can be cumbersome, the visualization is done on a computer screen.

Fig. 1. A 3D prism reconstructed from the bottom (first image) and front (middle image) perspectives previously drawn on the computer. The 3D object is superimposed over the recognized glyph.

The remainder of this paper is organized as follows: Section II describes the technical details of the AG3DO system; Section III presents the experimental results and user reactions to the proposed prototype; Section IV compares our prototype with a selected AR educational application; the last section presents conclusions and future work.

II. SYSTEM OVERVIEW

The AG3DO system is a three-dimensional geometric construction tool designed for visualizing three-dimensional geometrical objects. It contains three main modules:

1) 3D Reconstruction module;
2) 3D Rendering module;
3) Augmented Reality module.

The input to our algorithm consists of two contour freehand sketches. These drawings represent two of the projections of a geometrical shape, more precisely the bottom and the front projections. The 3D Reconstruction module has two main tasks:

1) Detection of key points from each freehand sketch;
2) Generation of all two-dimensional planes forming the 3D object.

After reconstruction, the 3D model is rendered on the computer screen and, finally, superimposed onto the real scene using Augmented Reality technology. In the following, details about the AG3DO system are presented.

A. 3D Reconstruction

The first step in our application is to detect the key points in the two freehand drawings provided by the user, corresponding to the bottom plane and the front plane. For this we use the Good Features to Track detector [12] implemented in the OpenCV library. This detector finds the most prominent corners in the image or in a specified image region.
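Purely as an illustration (the paper itself contains no code), the detection step could be sketched with OpenCV's Python bindings as follows; the file name and detector parameters are assumptions, not values from the paper.

import cv2

# Minimal sketch of the key-point detection step using the Good Features to
# Track detector; the file name and parameter values are illustrative only.
sketch = cv2.imread("bottom_projection.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input sketch

corners = cv2.goodFeaturesToTrack(sketch,
                                  maxCorners=20,      # upper bound on detected corners
                                  qualityLevel=0.05,  # relative corner quality threshold
                                  minDistance=15)     # minimum spacing between corners (pixels)
key_points = corners.reshape(-1, 2)                   # N x 2 array of (x, y) corner coordinates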

Fig. 2. Detection step. Upper: bottom projection with the corresponding detected key points. Lower: front projection with the corresponding detected key points.

Figure 2 depicts the results of the detection step. After detecting the key points in each of the two projections, we need to sort them before reconstructing the desired geometrical object. For this we use the convexHull() function implemented in OpenCV. This function finds the convex hull of a 2D point set using Sklansky's algorithm [13], which has O(N log N) complexity in the current OpenCV implementation.
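Continuing the same illustrative sketch, the ordering step could call cv2.convexHull directly; key_points is the hypothetical array produced by the detection snippet above.

import cv2
import numpy as np

# Order the detected corners along the contour by taking their convex hull.
hull = cv2.convexHull(key_points.astype(np.float32), returnPoints=True)
ordered_points = hull.reshape(-1, 2)   # hull vertices, ordered around the boundary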

To reconstruct the geometrical object in 3D, we have to generate the different planes from the points detected in each drawing. For the first two planes given by the user, we have to bring the front plane to a 90-degree angle relative to the bottom plane and align their bottom points so that they connect. This is done by applying a 90-degree rotation matrix about the X axis to the points of the front plane. The resulting matrix is then multiplied by a translation matrix along the Y axis and, finally, by a corresponding scaling matrix. All the other planes are then generated using the same approach. Figure 3 shows the resulting 3D object after reconstruction.
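A minimal numerical sketch of this alignment follows; the offset and scale arguments are placeholders, not the system's actual parameters.

import numpy as np

def align_front_plane(front_pts_2d, y_offset=1.0, scale=1.0):
    """Lift the 2D front-projection points to z = 0, rotate them 90 degrees
    about the X axis, then translate along Y and scale so that the front
    plane meets the bottom plane (offset and scale are placeholder values)."""
    theta = np.pi / 2.0
    rot_x = np.array([[1.0, 0.0, 0.0],
                      [0.0, np.cos(theta), -np.sin(theta)],
                      [0.0, np.sin(theta),  np.cos(theta)]])
    pts = np.column_stack([front_pts_2d, np.zeros(len(front_pts_2d))])  # (x, y, 0)
    pts = pts @ rot_x.T          # rotate the sketch into a vertical plane
    pts[:, 1] += y_offset        # translate along Y to align the bottom edges
    return pts * scale           # apply uniform scaling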

Fig. 3. The reconstructed 3D hexagon.

B. 3D Rendering

Fig. 4. The Geometry Pipeline.

To visualize the 3D object we use the XNA framework, which has the advantage of being available on all Windows devices, including Windows Phone. However, XNA cannot directly render the planes defined previously: each plane has to be decomposed into a set of triangles and only then sent to the XNA vertex buffer for rendering. We chose the following approach:

1) Identify the Center of Mass (CM) of each plane by considering each vertex (V) equal in weight;
2) Using the identified CM, define the triangles as shown in Figure 5.

Fig. 5. Decomposition of a plane into triangles for XNA rendering.

where $CM = \frac{1}{6}\sum_{i=1}^{6} V_i$ for the six-vertex plane shown in Figure 5.

To create the final 3D models for our system we use triangle strips. A triangle strip is a list of vertices that defines a series of triangles connected to one another, and it is one of the most commonly used primitives in 3D graphics.
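A rough sketch of the CM-based decomposition described above (a generic triangle fan around the centroid; in the actual system the result is handed to XNA, here it is plain Python for illustration):

import numpy as np

def triangulate_plane(vertices):
    """Decompose a convex planar polygon into triangles around its center of
    mass (CM), mirroring the scheme of Figure 5; illustrative sketch only."""
    v = np.asarray(vertices, dtype=float)
    cm = v.mean(axis=0)          # CM = (1/N) * sum of the vertices
    n = len(v)
    # One triangle per polygon edge: (CM, V_i, V_{i+1}).
    return [(cm, v[i], v[(i + 1) % n]) for i in range(n)]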

After the 3D geometrical model is reconstructed from its planes, it has to be translated into a flat image in order to appear correctly on the screen. This is done through the Geometry Pipeline (see Figure 4), so called because vertices are put through each step one at a time, being progressively transformed into their flat image form. Every object in the scene passes through this sequence of three transformations (a minimal sketch of the pipeline is given after the list):
• World Transformation: positions all models in relation to each other. It changes coordinates from model space to world space by rotating, translating and scaling the 3D model.
• View Transformation: orients the 3D coordinate system toward the viewer. It uses the concept of a virtual camera with an exact position and viewing direction.
• Projection Transformation: converts the 3D models into 2D images by mapping the 3D coordinates to screen coordinates.
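A minimal sketch of the three transformations (not XNA code; the camera position, field of view and model placement below are arbitrary illustrative values):

import numpy as np

def look_at(eye, target, up):
    """View transformation: orient a virtual camera at `eye` looking at `target`."""
    f = target - eye
    f = f / np.linalg.norm(f)
    r = np.cross(f, up)
    r = r / np.linalg.norm(r)
    u = np.cross(r, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def perspective(fov_y, aspect, near, far):
    """Projection transformation: map camera-space points to clip space."""
    t = 1.0 / np.tan(fov_y / 2.0)
    proj = np.zeros((4, 4))
    proj[0, 0] = t / aspect
    proj[1, 1] = t
    proj[2, 2] = (far + near) / (near - far)
    proj[2, 3] = 2.0 * far * near / (near - far)
    proj[3, 2] = -1.0
    return proj

# World transformation: here simply a translation of the model (placeholder values).
world = np.eye(4)
world[:3, 3] = [0.0, 0.0, -5.0]

vertex = np.array([1.0, 1.0, 0.0, 1.0])       # homogeneous model-space vertex
view = look_at(np.array([0.0, 2.0, 5.0]), np.zeros(3), np.array([0.0, 1.0, 0.0]))
clip = perspective(np.pi / 3, 4 / 3, 0.1, 100.0) @ view @ world @ vertex
screen = clip[:3] / clip[3]                   # perspective divide gives the flat image coordinates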

Once the rendering step is completed, we can move on to the final step: augmenting the reconstructed 3D object onto the real scene using Augmented Reality technology.

C. Augmented Reality Technology

After reconstructing the 3D object, what remains is to augment it onto the real scene. For this step we used GRATF (Glyph Recognition And Tracking Framework), which provides a library for localization, recognition and pose estimation of optical glyphs in still images and video files [14]. The glyph recognition and pose estimation library is an extension of the AForge.NET framework. Optical glyph recognition is applied mostly in Augmented Reality projects and robotics applications.
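GRATF itself is a C#/AForge.NET library, so it cannot be shown in the Python sketches used above; purely as an illustrative stand-in, the underlying glyph pose-estimation idea can be expressed with OpenCV's solvePnP applied to the four detected glyph corners (all numeric values below are made up).

import cv2
import numpy as np

# Stand-in sketch for glyph pose estimation (not GRATF's API).
glyph_size = 0.05                                   # assumed glyph side length in metres
object_pts = np.array([[-1, -1, 0], [1, -1, 0],
                       [1, 1, 0], [-1, 1, 0]],
                      dtype=np.float32) * glyph_size / 2.0   # glyph corners in its own plane
image_pts = np.array([[310, 220], [420, 225],
                      [415, 335], [305, 330]],
                     dtype=np.float32)              # detected glyph corners (made-up pixel values)
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0, 0, 1]], dtype=np.float32)      # assumed camera intrinsics
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, camera_matrix, None)
# rvec/tvec give the glyph pose; rendering the reconstructed 3D object with this
# pose makes it appear anchored to the glyph in the camera image.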

Figure 6 shows a few snapshots from the AG3DO simulation, depicting the virtual 3D geometrical object superimposed over the recognized glyph.

Fig. 6. The glyph used for recognition (first image). Various positions of the augmented 3D reconstructed hexagon.

III. EXPERIMENTAL RESULTS

In this section, we demonstrate the effectiveness of our AG3DO reconstruction and visualization system on various pairs of freehand line drawings (see Figure 7). The snapshots of the reconstructed 3D objects were taken in the virtual world. The virtual world was created by choosing not to render the real scene after the glyph was recognized; in this way we could still manipulate the virtual object by moving the glyph in front of the camera.


Fig. 7. Various pairs of freehand line drawings and their corresponding virtual reconstructed geometrical objects (a cube, a pentagon and an octagon). Left: bottom projection. Right: front projection.

A. Reactions

Based on the functionality described so far, we performed an informal user study with 20 students. The goal of the study was to see how the participants interact with the AG3DO Augmented Reality system. They were asked to draw the bottom and front perspectives of a geometrical model of their choice (with the specification that the base projection must be a square or a rectangle). They were then given two options:

1) to see the virtual object in a virtual world;
2) to watch the model augmented onto the real scene.

In both cases we ran the simulation and let the participants form an impression of it. They were satisfied with the graphics used in the application.

Half of the students were impressed by the augmentation of the 3D object over the recognized glyph. The other half preferred visualizing the 3D geometrical object in a virtual world to seeing it augmented onto the real scene; they felt they had more control over the virtual object and could examine it better without being distracted by the glyph. The only difference between the virtual and the augmented modes was that the real scene was intentionally not rendered on the computer screen in the virtual-world mode, so the object could still be manipulated by moving the marker in front of the camera, just as in the augmented scenario.

Overall, the response to our prototype was positive, which encourages us to improve it in the future.

IV. COMPARISON BETWEEN AG3DO AND [11]

Because we were inspired by the work of Bergig et al. [11], it is natural to also compare our prototype with their framework. Our system is not finished; therefore, the comparison is limited to the elements common to both developments.

Bergig et al. introduced the ability to author mechanical systems in 3D by hand sketching with ordinary pencil and paper. The mechanical system can be composed of user-generated sketches as well as textbook diagrams. The user can use the camera to manipulate models, positioning, scaling and rotating them, as well as modify the model sketch to apply structural changes to the 3D model.

The first difference between their system and our prototype is that we chose to use two contour sketches to generate the final 3D model, instead of a single sketch or diagram. This choice was made to ease the user's work: instead of drawing a 3D sketch, the user only has to draw simple 2D sketches. Also, by using these 2D contours (bottom and front planes), the computational time decreases compared to the work of Bergig et al., where the animation is not performed in real time.

As in the system we compare against, our prototype is capable of manipulating the generated models by modifying their position and scale and by rotating them.

The novelty of our prototype resides in the conceptual differences between the proposed system and similar approaches, [11] being one such approach.

V. CONCLUSION

In this paper we presented a fully functional AR application suitable for 3D object visualization. AG3DO uses sketches as a means of communication. We have shown how to reconstruct a 3D object from two 2D line drawings representing two views of the desired object.

We have demonstrated that sketch interpretation and 3D geometrical reconstruction techniques combined with Augmented Reality provide an interactive way to assist learning and can improve the spatial ability of the user. This ability can encourage students to experiment with mathematical simulations and learn about geometry in a non-conventional way.


One limitation of our work is that it requires the base orthographic projection to be a specific quadrilateral (either a square or a rectangle).

Our algorithm can be extended to accept orthographic projections with different shapes (triangles, polygons, conics, etc.). Even though the front projection can be any type of polygon, the base projection cannot; this is a limitation we will overcome in future work. We also intend to use more than two views of a three-dimensional object in order to reconstruct shapes other than geometrical ones, thus extending the application of our system to other research areas.

ACKNOWLEDGMENT

This paper is supported by the Sectoral Operational Programme Human Resources Development (SOP HRD), ID76945, financed by the European Social Fund and by the Romanian Government.

REFERENCES

[1] H. Kaufmann, D. Schmalstieg, and M. Wagner, "Construct3D: A virtual reality application for mathematics and geometry education," Education and Information Technologies, vol. 5, pp. 263–276, 2000.
[2] Y.-C. Chen, "A study of comparing the use of augmented reality and physical models in chemistry education," in Proceedings of the 2006 ACM International Conference on Virtual Reality Continuum and Its Applications, ser. VRCIA '06, 2006, pp. 369–372.
[3] R. Freitas and P. Campos, "SMART: a system of augmented reality for teaching 2nd grade students," in Proceedings of the 22nd British HCI Group Annual Conference on People and Computers: Culture, Creativity, Interaction - Volume 2, ser. BCS-HCI '08, 2008, pp. 27–30.
[4] A. K. Sin and H. Badioze Zaman, "Tangible interaction in learning astronomy through augmented reality book-based educational tool," in Proceedings of the 1st International Visual Informatics Conference on Visual Informatics: Bridging Research and Practice, ser. IVIC '09, 2009, pp. 302–313.
[5] I. E. Sutherland, "Sketchpad: A man-machine graphical communication system," in Proceedings of the AFIPS Conference, vol. 23, 1963.
[6] R. C. Zeleznik, K. P. Herndon, and J. F. Hughes, "SKETCH: An interface for sketching 3D scenes," in Proceedings of SIGGRAPH 96, ser. Computer Graphics Proceedings, Annual Conference Series, Aug. 1996, pp. 163–170.
[7] "AutoCAD," http://usa.autodesk.com/autocad/.
[8] "SolidWorks," http://www.solidworks.com.
[9] "RasterVect, a raster to vector conversion program (vectorizer)," http://www.rastervect.com/.
[10] R. Clifford, A. Clark, and M. Rogozin, "Using Augmented Reality for Rapid Prototyping and Collaborative Design to Model 3D Buildings," in 12th Annual SIGCHI-NZ Conference on Human-Computer Interaction (CHINZ 2011). ACM, 2011, pp. 117–120.
[11] O. Bergig, N. Hagbi, J. El-Sana, and M. Billinghurst, "In-Place 3D Sketching for Authoring and Augmenting Mechanical Systems," in 8th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2009), 2009, pp. 87–94.
[12] J. Shi and C. Tomasi, "Good features to track," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR '94), 1994, pp. 593–600.
[13] J. Sklansky, "Finding the convex hull of a simple polygon," Pattern Recognition Letters, vol. 1, pp. 79–83, 1982.
[14] "GRATF (Glyph Recognition And Tracking Framework)," http://www.aforgenet.com/projects/gratf/.
