
Applications in Virtual Reality Project

Brendan John Chinar Patil James Pieszala

ABSTRACT
In this paper we present an immersive virtual environment that can be shared between multiple users: specifically, a game of chess played between two users. To enter this virtual environment a user puts on a virtual reality headset and uses gestures to interact with the scene. This project is a small portion of a term-long project in an Applications in Virtual Reality course offered at the Rochester Institute of Technology (RIT).

1. INTRODUCTION
For this project we have developed a program that allows two users to play chess with each other in a virtual space. By its nature, the two users do not have to be in the same room physically, or even the same city. The program is networked, allowing users to play with anyone who has internet access. The game we chose to build was chess; however, with enough time and effort most board games could be designed for this system.

As mentioned before, the motivation for this work is a term-long virtual reality project. For the project the class is given the challenge of designing a virtual stage where performers, instruments, and the audience are integrated into one common virtual space. While other students are working on those other features, our group was tasked with designing a more immersive virtual reality experience, using a head mounted display (HMD) for 3D viewing and other technologies to interact with the virtual environment.

To accomplish this task we chose to implement a chess game that can be played by two players at two separate computers. Each user has a virtual reality headset to view the application and a hand tracking device to provide input to the application.

2. RELATED WORKS
To design our system we looked into the currently available technologies that could be used. Specifically, we researched the Oculus Rift head mounted display, the Microsoft Kinect depth sensor, and the Leap Motion hand gesture device.

2.1 Interactive Technology
First, we looked into using a Microsoft Kinect to put the player into the virtual space. The Kinect might be useful for scanning a body part or tracking body movement; however, for something as small as hand and finger movements it might not be accurate enough. Even with the Microsoft Kinect version 2, few hand gestures can be identified and individual fingers are not necessarily tracked. Due to this we turned to the Leap Motion device.

As seen in Figure 1, the Leap Motion's range is much shorter than that of the Microsoft Kinect, which is recorded as 0.8 meters to 4.0 meters at 70 degrees horizontally and 60 degrees vertically. The Leap Motion has a range of just under 3 feet, with significantly better fields of view around the device. Because it has a smaller range, the level of detail in the depth image is higher. The Leap Motion also succeeds because it is designed specifically for hand tracking; its SDK provides a detailed 3D hand model that can be used in our application.

Figure 1: Shown is the field of view of a Leap Motion device.

Several papers have been published that look at the performance of the Leap Motion. For example, one paper reported that the Leap has an inconsistent sampling rate, and that error increases as you move away from the sensor [4]. The Leap Motion developers have looked for a threshold below which latency will no longer be noticed, reporting a value around 30 ms [1]. This value of course varies from person to person, as it depends on the user's visual and nervous systems. Based on this work, most of the latency in the program comes from transferring data over the USB connection. Overall, the Leap Motion still creates a believable virtual reality experience.

2.2 Virtual Reality
Next, we looked into which head mounted displays we could use for the project. Due to its popularity, we first looked at the Oculus Rift Dev Kit 1 and 2; due to its low price and versatility, we also looked at the new Google Cardboard.

The Oculus Rift was a natural choice due to its performance, accessibility, and the availability of an SDK. The device first came out in 2012 and has been improved ever since. Currently, the Dev Kit 2 is available for purchase and provides many improvements over the original, including a higher screen resolution, better head tracking, and an added positional tracking component. With these changes the device has become more robust and leads to a much more immersive experience.

Studies with the device are mostly centered on latency and rendering time, as these are major factors in believable virtual reality and in preventing motion sickness. Many have worked on optimizing the head tracking portion of the device: when the user turns his head, the image needs to update fast enough to match the perceived movement in the virtual scene. One such paper cited a latency of around 20 ms as ideal for a good virtual reality experience [6].

Optimization must also be performed on the rendering side. While a high resolution screen helps, the virtual scene being rendered must have realistic lighting and a high level of detail to be perceived as a real scene. This is possible with the help of rendering frameworks such as OpenGL and DirectX. For example, an application designed to help users with amblyopia, also known as a lazy eye, has been used to help users improve stereoscopic vision and potentially correct other errors in vision [2]. This remains to be tested; however, the initial results with the application suggest the rendering is realistic enough to make at least a temporary difference in tricking the user's visual system.

3. SYSTEM DESIGN
Our system is designed to utilize two HMDs and two Leap Motion devices to encapsulate two users in an interactive virtual space. To use these technologies together, an application is built with the Unity game engine; our application is built entirely inside the Unity editor.

3.1 Hardware Used

Figure 2: Pictured is a mapping of sampling as used by ray-traced rendering in an Oculus Rift headset.

To create our virtual environment we utilized two Oculus Rift head mounted displays. Currently our system has been tested with an Oculus Rift DK1 and an Oculus Rift DK2; however, any combination of Oculus Rift HMD versions can be used. The main differences between these two versions are the resolution of the display and the tracking of the HMD.

The Oculus Rift DK2 features improved orientation tracking, as well as adding positional tracking to the device. To perform this positional tracking a camera is placed on top of the display, or anywhere with a view of the headset. The camera feeds this data into the application, and movements such as leaning forward or backward and turning side to side are reflected in the virtual space. This is a vast improvement on the Oculus Rift DK1, which features orientation tracking only, reflecting which way the user turns their head and nothing else.

Next, to perform hand tracking and gesture input, the Leap Motion device is used. This decision was made due to the quality and accuracy of the 3D hand model produced and the low latency of the device. Also, a recently released attachment for the Oculus Rift allows the Leap Motion to be mounted on the HMD and perform overhead hand tracking.

3.2 Software
Unity was selected for development due to its ease of use, its free development version, and its compatibility with the selected virtual reality hardware. Unity also features a robust networking service, allowing easy setup of a client-based application [3, 5]. The Oculus Rift SDK and Leap Motion SDK are also used to create our application.

Unity features a full-fledged rigid body physics system, 3D scene construction, 3D animation, and the various other features needed to build a virtual environment. Unity applications are cross-platform, which allows us to create Windows and Mac versions of the application that interact with each other seamlessly. It also provides an IDE for editing code and debugging the application as it runs.

The Oculus Rift features a publicly available SDK and a recently released plugin that works with the free version of Unity. This allows the Oculus Rift to be integrated into an application by simply replacing the scene camera with an Oculus Rift camera object. With this plugin, rendering to the Oculus Rift is handled by Unity, including the barrel distortion effect necessary for a proper stereoscopic experience. Best of all, this game object is independent of which version of the Oculus Rift is being used, allowing for use with different combinations of Oculus Rift versions.

The Leap Motion SDK is likewise publicly available, along with a Unity plugin for the device. The Unity plugin gives us access to prefabricated 3D hand models provided with the Leap Motion example projects. These example projects also feature code for pinching objects, hand gestures, and other physics-based functions. Scripts from these projects were used as building blocks for our application.

3.3 Networking
Next, using Unity networking we developed a simple peer-to-peer application with two designated clients. This allows one player to start the game, after which a player functioning as the other client can connect and play. This can be seen in Figure 3.

Figure 3: Illustrated is the system design. Two users functioning as clients interact with each other over a network to update game logic and to reflect changes in the other player and the other player's pieces. Locally, each client handles rendering to its Oculus Rift and taking input from its Leap Motion device.

Information about where the 3D objects in the scene are located is relayed between each instance via Unity networking, using a function that observes movement of both the chess pieces and the 3D models of each player. Network calls are defined that locally update the game flow and game logic data structures of each client.
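
The paper includes no code, but the "function that observes movement" maps naturally onto legacy Unity networking's OnSerializeNetworkView callback, paired with RPCs for game-logic updates. The following is a minimal sketch of that pattern, assuming a NetworkView component set to observe this script; the class name, the square-index arguments, and the division of labor are illustrative, not taken from the project.

```csharp
using UnityEngine;

// Attached to each networked object (chess piece or player model). Unity
// calls OnSerializeNetworkView automatically to stream observed state
// between the two clients.
public class PieceSync : MonoBehaviour {
    void OnSerializeNetworkView(BitStream stream, NetworkMessageInfo info) {
        Vector3 pos = transform.position;
        Quaternion rot = transform.rotation;
        if (stream.isWriting) {           // this instance owns the object
            stream.Serialize(ref pos);
            stream.Serialize(ref rot);
        } else {                          // remote update arriving
            stream.Serialize(ref pos);
            stream.Serialize(ref rot);
            transform.position = pos;
            transform.rotation = rot;
        }
    }

    // Discrete game-logic changes (e.g. a completed move) go out as RPCs.
    public void AnnounceMove(int fromSquare, int toSquare) {
        GetComponent<NetworkView>().RPC("ApplyMove", RPCMode.Others,
                                        fromSquare, toSquare);
    }

    [RPC]
    void ApplyMove(int fromSquare, int toSquare) {
        // Update the local game flow and game logic data structures here.
    }
}
```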

4. IMPLEMENTATION
The game can be seen in Figure 4. The game is set up such that both skeletons are playing a game of chess against each other in a varying 3D environment.

4.1 Leap Motion Interaction
4.1.1 Chess Piece Pinching
The Leap Motion device has some inconsistencies when holding on to objects. A held object can slowly slide out of the hand, which would ruin the player experience. In some cases, if the piece has a rigid body, it flies away or does not fall on the correct location; when no rigid body is assigned to a piece, it floats. Due to these factors it was important to constrain the pieces so they move only where they are supposed to move.

So, in order to hold on to a piece until the player wants to release it, we implemented a magnetic pinch: on a pinch gesture, the piece closest to the pinch is accelerated towards the pinch point and stays there until released. This was adapted from one of the Leap Motion examples, which we modified for our purposes. So that the player knows which piece would be picked, we change the color of that piece.
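
As an illustration of the idea, here is a minimal sketch of a magnetic pinch written against the Leap SDK v2 C# API. It is not the project's modified example script; the piece list, the pinch threshold, the pull speed, and the simplified millimetre-to-metre coordinate conversion (with no rig offset or rotation) are all assumptions.

```csharp
using UnityEngine;
using Leap;

// Illustrative magnetic pinch: while the pinch is held, the closest piece
// is pulled toward the pinch point and kept there until release.
public class MagneticPinch : MonoBehaviour {
    Controller controller = new Controller();
    public Transform[] pieces;            // all chess pieces in the scene
    Transform held;
    const float PinchThreshold = 0.7f;    // Hand.PinchStrength is 0..1

    void Update() {
        Frame frame = controller.Frame();
        if (frame.Hands.Count == 0) { held = null; return; }
        Hand hand = frame.Hands[0];
        // Leap reports millimetres; convert to Unity metres (offsets omitted).
        Vector p = hand.StabilizedPalmPosition;
        Vector3 pinchPoint = new Vector3(p.x, p.y, p.z) * 0.001f;

        if (hand.PinchStrength > PinchThreshold) {
            if (held == null) held = ClosestPiece(pinchPoint);
            // Accelerate the piece toward the pinch and hold it there.
            held.position = Vector3.MoveTowards(held.position, pinchPoint,
                                                2f * Time.deltaTime);
        } else {
            held = null;                  // released: snapping handled elsewhere
        }
    }

    Transform ClosestPiece(Vector3 point) {
        Transform best = null; float bestDist = float.MaxValue;
        foreach (Transform piece in pieces) {
            float d = (piece.position - point).sqrMagnitude;
            if (d < bestDist) { bestDist = d; best = piece; }
        }
        return best;
    }
}
```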

Figure 4: Views from each player, whose turn is indicated by the color of the two cylinders at the middle of the table.

Figure 5: Pictured is the scene when moving a piece. Illuminated tiles represent which spots are valid moves.

After being released, a piece has to move to one of its valid locations. When a piece is pinched, all of its valid locations are highlighted and the player can move it to one of those locations. To show which location the piece would be dropped on when released, we change the color of that location. This can be seen in Figure 5.
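
A sketch of this highlight-and-snap logic might look like the following; all names (MoveTargets, validSquares, Drop) are hypothetical, and the computation of the set of legal squares for a given piece is left out.

```csharp
using UnityEngine;

// Hypothetical helper: highlights valid target squares while a piece is
// pinched and snaps the piece onto the nearest one on release.
public class MoveTargets : MonoBehaviour {
    public Renderer[] validSquares;       // cubes for the piece's legal moves
    public Color highlight = Color.yellow;
    public Color selected = Color.green;

    // Called once when a piece is pinched: show all legal squares.
    public void ShowTargets() {
        foreach (Renderer square in validSquares) {
            square.enabled = true;
            square.material.color = highlight;
        }
    }

    // Called every frame while held: tint the square the piece would land on.
    public Renderer NearestSquare(Vector3 piecePos) {
        Renderer best = null; float bestDist = float.MaxValue;
        foreach (Renderer square in validSquares) {
            square.material.color = highlight;
            float d = (square.transform.position - piecePos).sqrMagnitude;
            if (d < bestDist) { bestDist = d; best = square; }
        }
        if (best != null) best.material.color = selected;
        return best;
    }

    // Called on release: snap the piece onto the chosen square, hide cubes.
    public void Drop(Transform piece, Vector3 boardSurfaceOffset) {
        Renderer target = NearestSquare(piece.position);
        if (target != null)
            piece.position = target.transform.position + boardSurfaceOffset;
        foreach (Renderer square in validSquares) square.enabled = false;
    }
}
```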

4.1.2 3D Modeling
Modeling for this project focuses on what virtual object representations are required to facilitate the virtual reality interactions and experience. In utilizing the Leap Motion to track hand movements, it is obvious that some type of virtual hand model will be required. The Leap Unity assets that Leap Motion has made freely available contain a number of hand models that are either object rigged or skin rigged mesh models. Also contained in the Leap Unity tools are basic auto-rig scripts that interface the Leap API with the Unity 3D models. These supplied modules work on the assumption that hand models are instantiated when the hands enter the sensor's range and destroyed when the hands leave. This supplied framework is sufficient for a static camera view focusing on the sensor range's virtual space, as any hands outside the sensor range cannot be seen anyway.

As this project uses multiple roving views, some type of persistence was desirable to prevent the hand models from magically appearing and disappearing in a participant's camera view. One solution considered was to mount the Leap Motion device directly on the front of the Oculus Rift, whereby the hands would always be tracked relative to the user's camera view. This technique, however, was insufficient for our purposes because our application mostly involves forward hand grabs that would have suffered from occlusion. The solution we chose was instead to give the hand models a default rest state for when the physical hands are not being tracked, and to engage them accordingly when they are. This solution also allows us to model a persistent avatar with certain rigged features inferred from the hand and arm movements. To accomplish this, it was immediately evident that none of the Leap-supplied models could be used, as none contains an associated body avatar. The Leap interface was also reworked to allow for persistence by reanimating static models as opposed to their dynamic counterparts.

In choosing a suitable model we restricted our search to models with more anatomically correct proportions, so that they would map well to the native Leap rigging. As an obvious solution we chose a skeletal representation, shown in Figure 6.

Figure 6: 3D model of the skeleton.

In order to utilize the Leap template structure, each bone of the hand, shown in Figure 7, had to be individually centered, scaled, and oriented into the corresponding Leap default bone settings. Once each bone is registered into a hand template, the Leap interface animates each bone's transform via its native rigging algorithms. As for the rigging of the rest of the skeleton, our present implementation has no data relating to these physical locations. As a workable solution we have assumed that the player is positioned at a certain orientation relative to the Leap sensor, and we attach the upper arm bone between the tracked lower arm and the shoulder accordingly.
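
To make the registration concrete, the sketch below drives a set of registered bone transforms from Leap SDK v2 skeletal data. The flat 5x4 bone ordering, the millimetre-to-metre scale factor, and the omission of per-bone calibration offsets are simplifying assumptions; this is not the project's actual rig interface.

```csharp
using UnityEngine;
using Leap;

// Hypothetical bone driver: once each skeleton bone is registered against
// the Leap hand template, its transform is animated from the tracked data.
public class SkeletonHandDriver : MonoBehaviour {
    Controller controller = new Controller();
    // One Transform per Leap bone, in finger-then-bone order
    // (5 fingers x 4 bones: metacarpal..distal).
    public Transform[] boneTransforms = new Transform[20];

    void Update() {
        Frame frame = controller.Frame();
        if (frame.Hands.Count == 0) return;   // rest pose handled elsewhere
        Hand hand = frame.Hands[0];

        int i = 0;
        foreach (Finger finger in hand.Fingers) {
            for (int b = 0; b < 4; b++, i++) {
                Bone bone = finger.Bone((Bone.BoneType)b);
                // Millimetres to metres; rig offset calibration omitted.
                Vector c = bone.Center;
                boneTransforms[i].position =
                    new Vector3(c.x, c.y, c.z) * 0.001f;
                Vector d = bone.Direction;
                boneTransforms[i].rotation =
                    Quaternion.LookRotation(new Vector3(d.x, d.y, d.z));
            }
        }
    }
}
```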

4.2 Oculus Rift
Integrating the Oculus Rift is fairly easy using the Unity editor. Once the package is downloaded from the Oculus developer site, importing it into a project is a menu selection away, after which the HMD object can be added to the scene. Once it is added, Unity handles all of the stereoscopic rendering, head tracking, positional tracking, and display functions. When building executables, Unity automatically builds a "directToRift" version and a normal version, to be used with the Oculus Rift's direct mode and extended mode respectively.

Figure 7: 3D model of the hand

The Oculus Rift package also includes a few sample Unity projects. From these projects a GUI system is implemented, as seen in Figure 10. By default the HMD object responds to the spacebar and R keys: spacebar displays the HUD, and R resets the orientation of the camera, so that the direction the HMD is currently facing becomes the direction the HMD object faces in the Unity scene.
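
The key handling itself is plain Unity input. In the sketch below, the hud object and the ResetOrientation stub are placeholders for behavior the Oculus plugin provides by default; the exact SDK call differs between plugin versions, so it is left as a stub rather than guessed at.

```csharp
using UnityEngine;

// Sketch of the two default key bindings described above.
public class HmdInput : MonoBehaviour {
    public GameObject hud;                    // the menu/HUD object

    void Update() {
        if (Input.GetKeyDown(KeyCode.Space))
            hud.SetActive(!hud.activeSelf);   // show/hide the HUD
        if (Input.GetKeyDown(KeyCode.R))
            ResetOrientation();               // re-zero the HMD yaw
    }

    void ResetOrientation() {
        // In the project this is handled by the Oculus Unity plugin; the
        // specific call varies between SDK versions, so it is stubbed here.
    }
}
```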

Environment spheres were added to facilitate scene views for all Oculus camera movement. Figure 8 shows the gesture used to change the environment sphere: touching the index fingers together.

4.3 Game Design
Since this is a two player game, there is no artificial intelligence implemented to move pieces, as there would have to be in a one player game. To move the pieces to their valid locations we use cubes, which are illuminated depending on the piece picked. These cubes are instantiated below the chess board at runtime and are invisible, as sketched below. The cube closest to the pinched piece is highlighted in a different color so the player knows where the piece will be dropped on release.
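
A minimal sketch of instantiating the grid of invisible target cubes at runtime follows; the square size, board origin, and 8x8 layout are illustrative values, not the project's.

```csharp
using UnityEngine;

// Sketch: build an 8x8 grid of invisible target cubes under the board.
public class TargetGrid : MonoBehaviour {
    public float squareSize = 0.06f;          // illustrative square width
    public Vector3 boardOrigin;               // centre of square a1, below board
    public Renderer[,] squares = new Renderer[8, 8];

    void Start() {
        for (int file = 0; file < 8; file++) {
            for (int rank = 0; rank < 8; rank++) {
                GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
                cube.transform.position = boardOrigin +
                    new Vector3(file * squareSize, 0f, rank * squareSize);
                cube.transform.localScale = Vector3.one * squareSize;
                Renderer r = cube.GetComponent<Renderer>();
                r.enabled = false;            // invisible until a piece is pinched
                squares[file, rank] = r;
            }
        }
    }
}
```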

Figure 8: Scene Change Gesture: Touch Index Fin-gers

Figure 9: Player Turn Gesture: Cross Hands

One of the players starts the server and the other joins in. It is a turn based game, so while one player is making his move, the other cannot move his pieces. Player turns can be changed either by crossing hands or by pinching the cylindrical object to the right of the player. Figure 9 shows the player turn gesture being performed.
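
One simple way to detect the crossed-hands gesture with the Leap SDK v2 API is to compare palm positions, as sketched below; the project's actual detector may differ, and triggering only on the transition (not every frame) is left out.

```csharp
using UnityEngine;
using Leap;

// Illustrative detector for the crossed-hands turn gesture: the gesture
// holds when the left palm is to the right of the right palm.
public class TurnGesture : MonoBehaviour {
    Controller controller = new Controller();

    public bool HandsCrossed() {
        Frame frame = controller.Frame();
        Hand left = null, right = null;
        foreach (Hand h in frame.Hands) {
            if (h.IsLeft) left = h;
            if (h.IsRight) right = h;
        }
        if (left == null || right == null) return false;
        return left.StabilizedPalmPosition.x > right.StabilizedPalmPosition.x;
    }
}
```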

A player cannot make two moves in the same turn. If the player wants to change his move, he can undo the previous move by turning both hands over and then making the new move. He can do this as long as it is his turn.
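
Interpreting "turning both hands over" as both palms facing upward, a Leap-based detector could be as simple as the following sketch; the 0.5 threshold on the palm normal is an assumption.

```csharp
using UnityEngine;
using Leap;

// Illustrative detector for the undo gesture: both palms turned upward.
// Leap's PalmNormal points out of the palm, so a palm-up hand has a
// normal with a large positive y component.
public class UndoGesture : MonoBehaviour {
    Controller controller = new Controller();

    public bool BothPalmsUp() {
        Frame frame = controller.Frame();
        if (frame.Hands.Count < 2) return false;
        foreach (Hand h in frame.Hands)
            if (h.PalmNormal.y < 0.5f) return false;  // not turned over enough
        return true;
    }
}
```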

We have also implemented a reset function that resets the board: all the pieces are moved back to their starting positions. This is done by pressing R. The turn remains with the player who resets the board.
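
A minimal sketch of such a reset caches the starting positions once and restores them on R; wiring up the piece list and mirroring the reset to the other client over the network are omitted.

```csharp
using UnityEngine;

// Sketch of the board reset: starting positions are cached at startup and
// restored when R is pressed. The current turn is left unchanged.
public class BoardReset : MonoBehaviour {
    public Transform[] pieces;
    Vector3[] startPositions;

    void Start() {
        startPositions = new Vector3[pieces.Length];
        for (int i = 0; i < pieces.Length; i++)
            startPositions[i] = pieces[i].position;
    }

    void Update() {
        if (Input.GetKeyDown(KeyCode.R))
            for (int i = 0; i < pieces.Length; i++)
                pieces[i].position = startPositions[i];
    }
}
```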

4.4 Networking
The networking model we employed is essentially peer-to-peer, with a combination of authoritative and non-authoritative networking techniques. As this is a virtual reality application, any network lag associated with first person movements is completely unacceptable; in this regard, a player's movements must be fully non-authoritative. This project also being an experiment in a shared virtual space, we need a certain level of synchronized physics and game state as well. Due to network lag, no solution allows both of these criteria to be completely met without undesirable side effects. To this end, we chose a solution where players must take turns when interacting with common game state objects (i.e., chess pieces).

Using Unity3D's built-in networking, the basic connection between instances starts as a type of client-server architecture. Any network-synchronized objects instantiated on startup are owned and controlled by the server only. In order for a client to obtain these privileges for its objects, it must first propose a new network identifier and request that the server re-designate each object as such. After the server does so, the client is free to apply these changes locally and gains equal footing.
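
In legacy Unity networking this hand-off can be expressed with Network.AllocateViewID and an RPC to the server, roughly as sketched below; the script name is hypothetical, and the confirmation step before the client applies the change locally is omitted.

```csharp
using UnityEngine;

// Sketch of the ownership hand-off: the client allocates a fresh view ID
// and asks the server to re-designate one of its objects with it.
public class OwnershipTransfer : MonoBehaviour {
    // Client side: propose a new network identifier for this object.
    public void ClaimOwnership() {
        NetworkViewID newId = Network.AllocateViewID();
        GetComponent<NetworkView>().RPC("Redesignate", RPCMode.Server, newId);
        // Once the server confirms, the client applies the same change
        // locally and gains control (confirmation RPC omitted here).
    }

    // Server side: adopt the client-proposed ID so the client owns the object.
    [RPC]
    void Redesignate(NetworkViewID newId) {
        GetComponent<NetworkView>().viewID = newId;
    }
}
```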

The graphical interface for starting the peer-to-peer setup can be seen in Figure 10. Upon loading the game, a pop-up menu can be summoned with the spacebar. In this menu the user can press S to host a server, or press H to connect to an open server.
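
With legacy Unity networking, the two menu actions reduce to two calls, as in this sketch; the port number, connection count, and hard-coded server address are illustrative.

```csharp
using UnityEngine;

// Sketch of the S/H menu actions using legacy Unity networking.
public class ConnectionMenu : MonoBehaviour {
    public string serverIp = "127.0.0.1";     // illustrative address
    const int Port = 25000;                   // illustrative port

    void Update() {
        if (Input.GetKeyDown(KeyCode.S))
            Network.InitializeServer(1, Port, false);  // host, allow one peer
        if (Input.GetKeyDown(KeyCode.H))
            Network.Connect(serverIp, Port);           // join an open server
    }
}
```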

5. TEST CASES
As described in Section 3.2, Unity builds cross-platform applications, meaning Windows and Mac machines can play the game together. This was one of our test cases, and it was validated easily.

Figure 10: Menu for the Oculus version, used to start a server or join a host.

The next logical test case was the Unity networking feature. Originally we tested the application using two laptops and the university Wi-Fi connection. Despite the Wi-Fi connection being fast, we still experienced lag. We hypothesized that this was due to the laptops themselves; however, it was quickly discovered that the Wi-Fi network caused the majority of the latency. The current setup uses two Windows 8 desktop machines and has minimal latency. Unfortunately, under Unity the game representation does not "catch up" to the other player once latency is experienced. For example, if one client is continuously waving his hands around when latency hits, and stops after 5 seconds, his opponent still sees a steady 5 seconds of hand waving afterward. This is not ideal: when latency hits, the opponent might make a move, which the user will then have to wait for in his instance of the game before being able to start his turn.

6. FUTURE WORKS
This project is merely one possible implementation in an endless sea of creative possibilities for sharing virtual spaces. Several proposed extensions of the present design are:

1. More realistic modeling using 3D-scanned hands, arms, and bodies with appropriate rigging, which continue to impress.

2. Merging a body capture technique with the Leap hand capture for full avatar tracking.

3. Adding more functionality to the game logic, such as checkmate and castling.

4. Integrating the application with the chess.com game servers, allowing the user to play against either a bot or a live person over TCP/IP.

5. Supporting Google Cardboard, a novel solution that lets users experience virtual reality without committing to a more expensive head mounted display. Although rendering speeds on phones will undoubtedly limit the visual complexity of scenes, being wireless has its advantages.

6. Adding more games and virtual environments.

7. CONCLUSIONS
This implementation of chess is the first in a list of possible environments and scenarios for virtual reality. While it is only a simple game, it provides the experience of sharing a virtual environment with someone who may or may not be in the same physical location. This has many applications, including work meetings, virtual lectures, social talks, and other scenarios. While this implementation takes place in virtual reality, it does not carry the effect that augmented reality would; being able to see a virtual representation of your professor standing in front of a lecture hall would have considerable effect as well. Overall this environment has room to grow, and future results look promising due to advances in HMDs and other interactive technology.

8. REFERENCES
[1] R. Bedikian. Understanding latency. http://blog.leapmotion.com/understanding-latency-part-1/, 2013.
[2] J. Blaha and M. Gupta. Diplopia: A virtual reality game designed to help amblyopics. In Virtual Reality (VR), 2014 IEEE, pages 163–164. IEEE, 2014.
[3] Palladium Games. Unity networking tutorial. http://www.palladiumgames.net/tutorials/unity-networking-tutorial/.
[4] J. Guna, G. Jakus, M. Pogacnik, S. Tomazic, and J. Sodnik. An analysis of the precision and reliability of the Leap Motion sensor and its suitability for static and dynamic tracking. Sensors, 14(2):3702–3720, 2014.
[5] T. Jokiniemi et al. Unity networking: Developing a single player game into a multiplayer game. 2014.
[6] S. M. LaValle, A. Yershova, M. Katsev, and M. Antonov. Head tracking for the Oculus Rift. In Robotics and Automation (ICRA), 2014 IEEE International Conference on, pages 187–194. IEEE, 2014.