     Senior Project 2013 Google’s Street View With Oculus Rift  

School of ICT, SIIT

     

    Project Report 

    Google’s Street View

    With

    Oculus Rift 

    Group Members 

    Daniel P. Dye (5322792797) 

    Thiraphong Chawla (5322793555) 

    Advisors: Dr. Cholwich Nattee 

    Dr. Nirattaya Khamsemanan 

    School of Information, Computer and Communication Technology, 

    Sirindhorn International Institute of Technology, 

    Thammasat University 

    Semester 2, Academic Year 2013 

    Date 

    March 10, 2014 


    Table of Contents 

    1. Introduction ............................................................................................................................ 5 

    2. Background ............................................................................................................................ 7 

    2.1. Oculus Rift ...................................................................................................................... 7 

    2.2. Unity ................................................................................................................................ 8 

    2.3. Google Street View ......................................................................................................... 9 

    2.4. Processing...................................................................................................................... 10 

    3. Motivation ............................................................................................................................ 11 

    4. Objectives ............................................................................................................................. 11 

    5. Outputs and Expected Benefits ............................................................................................ 12 

    5.1. Outputs .......................................................................................................................... 12 

    5.2. Benefits .......................................................................................................................... 12 

    6. Literature Review ................................................................................................................. 13 

    6.1. Google Maps with Oculus Rift and Leap Motion ......................................................... 13 

    6.2. Oculus Street View ........................................................................................................ 14 

    6.3. Oculus Rift and NASA’s virtual reality of Mars........................................................... 15 

    7. Methodology ........................................................................................................................ 16 

    7.1. Approach ....................................................................................................................... 16 

    7.1.1. Overview ................................................................................................................ 16 

    7.1.2. Obstacles ................................................................................................................ 19 

    7.2. Tools and Techniques .................................................................................................... 20 

    7.2.1. Tools ....................................................................................................................... 20 

    7.2.2. Techniques ............................................................................................................. 21 

    7.3. Technical Specifications ............................................................................................... 22 

    7.3.1. Oculus Rift ............................................................................................................. 22 

    7.3.2. Google Street View ................................................................................................ 23 

    7.3.3. Google Geocoding .................................................................................................. 23 

    8. Project Schedule ................................................................................................................... 24 

    9. Project Progress .................................................................................................................... 25 

    9.1. Research and Understanding ......................................................................................... 25 

    9.2. Obstacles and Solutions ................................................................................................ 30 

    9.3. Completion Steps .......................................................................................................... 33 

    10. Technical Description ........................................................................................................ 36 

    10.1. Overview ..................................................................................................................... 36 

    10.2. Implementation ............................................................................................................ 36 

    10.2.1. Unity ..................................................................................................................... 36 


    10.2.1.1. Spherical Branch .......................................................................................... 37 

    10.2.1.2. Flat Panorama Branch .................................................................................. 39 

    10.2.2. Processing ............................................................................................................. 40 

    10.2.2.1. Retrieve and display images from Street View API .................................... 40 

    10.2.2.2. Keyboard input look-around ........................................................................ 41 

    10.2.2.3. Create location search functionality ............................................................ 41 

    10.2.2.4. HMD Calibration ......................................................................................... 42 

10.2.2.5. Shader ........................................................................... 42

    10.2.2.6. Retrieving sensor values and create look-around functionality ................... 43 

    10.2.2.7. Calibrate eye distance or inter-pupillary distance (IPD) ............................. 44 

    10.2.2.8. Movement trigger and its functionality ....................................................... 44 

    10.2.2.9. Overlays ....................................................................................................... 44 

    10.3. Interface ....................................................................................................................... 46 

    10.3.1. Unity ..................................................................................................................... 46 

    10.3.2. Processing ............................................................................................................. 46 

    11. References .......................................................................................................................... 50 


    Statement of Contribution 

By submitting this document, all students in the group agree that their contribution to the project so far, including the preparation of this document, is as follows:

    Daniel P. Dye (5322792797) ………………………………………50%

    Thiraphong Chawla (5322793555) ………………………………………50%


    1. Introduction

Many of us have dreamt about visiting places around the world, but this is often impossible for reasons such as family commitments or cost. We have therefore decided to make it possible to visit these places in virtual reality with the help of a device known as the Oculus Rift. This virtual reality will be a spitting image of the physical world we live in, achieved by using services provided by Google. The goal of the project is to create a software system that uses only the Oculus Rift to maneuver around Google Street View's virtual world.

Google provides two services, Google Maps and Google Earth, that map the streets and locations of places all around the world. These services are integrated with a feature known as Google Street View [1], which shows a panoramic view of a location. The panoramic view allows users to look around 360 degrees horizontally and 290 degrees vertically, and is built by combining many panoramic images of the street [2].

The Oculus Rift is a virtual reality headset that lets head movement interact with video games [3]. It provides rotation input on three axes: x, y, and z. It also provides a nearly human field of view on a high-resolution display seen through two lenses in the headset. The device's very low latency keeps input and output synchronized at rates close to actual head movement, matching the way we naturally shift our field of vision.
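As a rough sketch of how these rotation readings translate into a view direction, the headset's yaw and pitch angles can be mapped to a unit look vector. Java is used here since Processing builds on it; the axis convention (yaw about the vertical axis, pitch about the horizontal axis) is our assumption for illustration, not a documented Oculus SDK convention.

```java
// Sketch: convert the headset's yaw and pitch (radians) into a unit
// look-direction vector. Assumed convention: yaw rotates about the
// vertical (y) axis, pitch about the horizontal (x) axis; roll only
// tilts the image and does not change the direction.
public class LookDirection {
    static double[] lookVector(double yaw, double pitch) {
        double x = Math.cos(pitch) * Math.sin(yaw);
        double y = Math.sin(pitch);
        double z = Math.cos(pitch) * Math.cos(yaw);
        return new double[] { x, y, z };
    }

    public static void main(String[] args) {
        double[] v = lookVector(0.0, 0.0); // looking straight ahead
        System.out.printf("%.2f %.2f %.2f%n", v[0], v[1], v[2]); // prints 0.00 0.00 1.00
    }
}
```

This vector can then select which part of the panorama to render, so the displayed view follows the user's head.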

We first decided to use the Unity engine for this project because the Oculus Rift has a software development kit (SDK) for Unity, but due to the engine's limitations on textures we migrated the project to Processing [4], which has comparatively fewer limitations. The limitations of the Unity engine are described in detail in the Project Progress (Section 9) and Technical Description (Section 10) sections.

We planned to create an application that allows users to look around as they would with a mouse, but, for convenience, uses the Oculus Rift so that looking and moving are done with simple, natural head movements. Since head movement is synchronized with the display, the Oculus Rift is very effective for viewing the panoramic images from Google Street View. The main focus of this project is to let users not only look around but also move around within Google Street View using the Oculus Rift alone. We will research and experiment with these head movements to find the most comfortable way to move.

We have also decided to make the system able to overlay animated images at certain geographical locations, making this not only a new idea but also an improvement on existing software. These dynamic overlays will display animated objects at pre-specified locations, for example explanations and descriptions of a place, or short animation loops of the culture or traditions practiced there. Such dynamic overlays make the experience more engaging for people than static images alone.

When this project is complete, it will be a portable and far cheaper way for people to see the sights of the world. People with disabilities will be able to use the system to visit places they never thought possible.

Section 2 provides the background of our topic and related technologies. Section 3 explains our motivation for working on this project. Section 4 states our aims and objectives of


The Oculus Rift team has been developing the device with many great minds in the gaming industry, including David Helgason, CEO of Unity, and Gabe Newell, president and owner of Valve.

We are using the Oculus Rift as our virtual reality interface because of its functionality and available specifications. The Oculus Rift can easily be used to view the panoramic images retrieved from Google Street View. Although it was designed with video gaming in mind, its functionality extends well beyond that, and it is a very effective device for this purpose.

Currently the Oculus Rift is available for development purposes, and a consumer version with more impressive specifications is planned. The team has developed an SDK for easy integration and development; it currently works with two game development engines, Unity and Unreal Engine.

Although we initially decided to develop our system with the Unity engine, we changed our minds due to its limitations and tried other development environments, such as Processing.

    2.2. Unity

    Figure 2: Unity3d

Unity is a 3D game development engine [8]. It provides exceptionally powerful rendering with multiple tools to assist development, allows publishing to multiple platforms without hassle, and offers an easy approach to controls and game rules through scripts attached to "Game Objects". It also has a large community and plentiful assets in its Asset Store.

We chose Unity over Unreal Engine because we had already spent a brief period of trial and error with Unity. We have learned some of the engine's basic controls and functions and intend to expand our knowledge while developing our system.

The system we are developing is computer software for displaying images on the Oculus Rift device, so the Unity engine is a very powerful tool to use for this project. It is designed for three-dimensional environments, which makes it convenient for creating a virtual reality from Google Street View. The software development kit provided by the Oculus Rift development team for the Unity engine makes it possible to configure the display for viewing on the Oculus Rift device.

Since we have decided to implement the animated images as overlays on the panoramic images of Google Street View, the Unity engine can make this possible: it is specifically designed for game development, and a system that displays the animated overlays can be built with it.

    2.3. Google Street View

    Figure 3: Google Street View on the new Google Maps

Google Street View enables users to explore streets, landmarks, and many other locations in a view that acts as a three-dimensional environment, achieved by panning images taken as panoramas. Google has made the Street View API publicly available, so we will use it to pull image data from Google and display locations from all around the world.
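To make the API usage concrete, here is a minimal sketch (in Java, which Processing builds on) of how request URLs for the Street View Image API could be assembled. The endpoint and parameter names follow Google's public Street View Image API; the coordinates are only an example and `YOUR_API_KEY` is a placeholder.

```java
// Sketch: build Street View Image API request URLs for several compass
// headings, so the returned tiles can later be stitched into a panorama.
public class StreetViewUrls {
    static String tileUrl(double lat, double lng, int heading, int pitch) {
        return "https://maps.googleapis.com/maps/api/streetview"
             + "?size=640x640"
             + "&location=" + lat + "," + lng
             + "&heading=" + heading
             + "&pitch=" + pitch
             + "&fov=90"
             + "&key=YOUR_API_KEY";
    }

    public static void main(String[] args) {
        // Four 90-degree tiles cover the full horizontal circle.
        for (int heading = 0; heading < 360; heading += 90) {
            System.out.println(tileUrl(13.736717, 100.523186, heading, 0));
        }
    }
}
```

Fetching one tile per 90-degree heading gives four images covering the full horizontal circle, which a stitching step can then combine into a single panorama.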


    2.4. Processing

    Figure 4: Processing

Processing is a programming language and integrated development environment initially designed to help teach programming through a visual context; it later grew into a powerful development tool. It was designed to help non-programmers get started with programming. It builds on the Java language but with much simpler syntax. Processing organizes projects as sketches derived from PApplet, a Java class that implements most of Processing's features. Processing is open source and works across multiple platforms, such as GNU/Linux, Windows, and Mac OS X.


    3. Motivation

Our motivation for this project is to work with virtual reality environments. The capabilities and potential of working with the Oculus Rift are huge. We decided to work with Google Street View because we wanted to create a virtual reality of our real physical world without having to 3D-sculpt everything.

This software will also allow people to visit places without the difficulties that travelling poses for their condition. It is a step towards enabling alternative sightseeing plans, and a positive advancement towards cheaper and easier ways of seeing the world.

We believe this project is feasible for us: we have strong programming backgrounds and have dealt with many technical obstacles. We are both Computer Science students surrounded by great-minded professors, and we have experience with other controllers such as the Leap Motion [9] and the Microsoft Kinect [10]. The concept of using a device to control a virtual replica of our physical world is fascinating; it is intriguing that this software lets us explore the world we live in with simple head movements.

    4. Objectives

The aim of this project is to create a system that will allow us to view our physical reality as a virtual reality using the Oculus Rift. The virtual reality will be an integration of Google Street View and animated images.

In order to achieve this aim, there are three objectives:

1. Create a system for displaying a virtual reality on the Oculus Rift using the Oculus Rift SDK.

2. Integrate the Oculus Rift with Google Street View using the Google Street View API.

3. Implement dynamic overlays for animated images.


    5. Outputs and Expected Benefits

    5.1. Outputs

The output of this project is a cross-platform application that allows a person to use the Oculus Rift to see Google Street View as a virtual reality. The application recognizes head movement and loads images, as well as the animated overlays, accordingly. There are also experimental results and reports specifying the steps and procedures, as well as the acceptable outcomes, of the experiments.

    5.2. Benefits

The benefit of this project is that it brings us one step closer to what many people have been dreaming about: experiencing life in virtual reality. Many people want to visit places around the world, but actually doing so can be quite expensive; the main benefit of this system is that it lets users experience those places visually, as they are in real life. Visualizing a place can make a person feel its atmosphere. By combining the Oculus Rift with Google Street View, this software lets us see the world with our own two eyes without physically going there.

As a short-term benefit for virtual reality software development, this program will be useful to people wanting to develop software that uses no hardware other than the Oculus Rift, and it will encourage people to enter the world of virtual reality. The virtual reality our project provides is not a simple one but a much more advanced one: its dynamic overlays could help many other development projects trying to achieve overlays in a 3D environment. As a long-term benefit, this software may become the basis of integrating virtual reality

with our physical reality. As seen in many movies, technologies that let a person look through other people's eyes could become possible with the help of this program: it can receive transmitted information from a source, whether a server or a person's eye. It could also be integrated with social networking sites, allowing people to upload videos of events at certain locations and share them with everyone around the world.


    6. Literature Review

We have come across various documents, videos, and websites related to this topic. Here are some projects that have either been completed or are in progress:

    6.1. Google Maps with Oculus Rift and Leap Motion

    Figure 5: First photo of Google Maps with Oculus Rift and Leap Motion

Google previewed its integration of the Oculus Rift and the Leap Motion with its new Google Maps during the Google I/O 2013 event [11], suggesting that it will support the Oculus Rift, though it also requires the Leap Motion to send navigation input. The integration uses Google Chrome as a mediator: since Google Maps already has an API for Chrome, Google combined it with the Chrome APIs for the Oculus Rift and the Leap Motion to make it fully functional.

Although Google has not made it official, the appearance of this integration of the Oculus Rift and the Leap Motion with Google Street View suggests the idea of turning reality into a virtual representation, which may become common in the near future. The advantages and disadvantages of this system are hard to assess, as no information was released beyond the preview. However, we can point to things this system lacks that ours offers: the Oculus Rift is our only controller, and we add an animated overlay layer, which makes for an exciting integration.


    6.2. Oculus Street View

Figure 6: Oculus Street View by troffmo5

Oculus Street View [12] is a Google Street View viewer for the Oculus Rift created by a developer with the username "troffmo5" [13], who has made a website that supports display on the Oculus Rift.

When viewed in full-screen mode, it works like a system built with any engine that supports the Oculus Rift, but it requires a Rift server [14] installed and running on a Windows machine to support the Rift's head movement. This allows anyone with an Oculus Rift device to explore by searching for a location with the provided mini-map and then selecting a place where Google Street View is available.

The system supports Oculus Rift head movement when the server is running; otherwise it can be navigated, i.e., looked around from a fixed point, by moving the mouse or pressing the arrow keys. It also supports gamepads (Chrome only), whose analog sticks are used to look around, and a certain key press or mouse click brings up the box for searching new areas.

As it is an open-source project, its JavaScript code has been made public by the project's owner; it may assist us with the basic integration of the Oculus Rift and Google Street View.

The benefits of this system may seem plentiful, but in reality there is little beyond the fact that it integrates Google Street View with the Oculus Rift and is freely available on the internet. It has a major disadvantage: it does not use the full functionality of Google Street View. Google Street View allows navigation by mouse click and zooming in and out at a pointed place, whereas this system only lets the user bring up the search box to search for a new location rather than navigate to areas near the previously searched one.


The other major limitation is that the Oculus Rift's head movement can only be used on a Windows operating system, and the server that enables it can add delay between head movement and navigation, a problem possibly caused by internet connectivity. The system we propose will allow us not only to look around but also to move around, which can have a major impact on the feeling of navigating and bring usability to a whole new level. This usability is also supported by the pre-cache system we will design to avoid the delay problem seen in Oculus Street View.

    6.3. Oculus Rift and NASA’s virtual reality of Mars

    Figure 7: Oculus Rift and Virtuix Omni with NASA’s virtual reality of Mars  

Employees of NASA's Jet Propulsion Laboratory have taken virtual reality of the physical world to the next level by not limiting it to the confines of the Earth. They combined stereoscopic 360-degree panoramic views of Mars, taken by the "Curiosity" rover, with satellite images and the developer version of the Oculus Rift available on the market, to map the surface terrain of Mars [15].

Initially, they used an Xbox 360 controller to move around while the Oculus was used to look around. Later, they replaced the Xbox 360 controller with the virtual reality treadmill "Virtuix Omni" [16], which makes it feel as if you were actually walking or running on Mars.

The limitation is that the system is not available for public use. They have put together a very interesting system, but it is still like other systems that rely on additional controllers to move around. Although the devices used are affordable to some, it would be better if the same thing could be done with just one device.


6. Improve the system to allow caching of "upcoming" images based on movement prediction.

6.1. Determine plausible methods for pre-loading and storing "upcoming" images.

7. Improve the coordination between head movement and the loading of images from Street View, to avoid or reduce any kind of sickness, such as motion sickness.

7.1. Determine the time taken to download an image from Google Street View under different circumstances, e.g., different internet speeds and different computational power.

7.2. Determine the time taken to change the image when movement is triggered.

8. Create an overlay that allows animated layers to be present at certain locations.

8.1. Determine the best methods to support the animations.

8.1.1. Experiment with different display methods and different animation file types, e.g., .GIF, .MOV, or .AVI.

8.2. Implement the dynamic overlays onto the Street View images.

8.2.1. Use the discovered methods to display overlays onto the canvas, above the Street View base.

9. Revise the system's functionality and improve code where applicable.

9.1. Revise the user interface and user experience; use surveys to determine necessary changes.

9.2. Revise backend loading methods; improve where possible.

9.3. Revise storage methods; consider compression, bulk loading, and other techniques.

10. Add functionality for adding more overlays to the system.

10.1. Create an interface to add more overlays, given the following information: coordinates, position, size, name, and animation file (with an optional caption).
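The pre-cache from step 6 could be sketched, for example, as a small least-recently-used (LRU) store keyed by panorama location. Java is used since Processing builds on it; the class and method names below are illustrative only, not part of the actual system.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the pre-cache idea: keep the most recently used panoramas in
// memory and evict the oldest once a fixed capacity is reached. The key
// would be a panorama id or a "lat,lng" string; byte[] stands in for the
// downloaded image data.
public class PanoramaCache {
    private final int capacity;
    private final LinkedHashMap<String, byte[]> cache;

    public PanoramaCache(int capacity) {
        this.capacity = capacity;
        // accessOrder = true gives least-recently-used eviction order.
        this.cache = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > PanoramaCache.this.capacity;
            }
        };
    }

    public void put(String location, byte[] imageData) { cache.put(location, imageData); }
    public byte[] get(String location) { return cache.get(location); }
    public boolean contains(String location) { return cache.containsKey(location); }
    public int size() { return cache.size(); }
}
```

Before a movement is triggered, the panoramas adjacent to the current heading could be fetched and `put` into the cache, so the transition does not block on the network.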


    Figure 8: Activity flow diagram


    7.1.2. Obstacles

Working on a large project such as this, we knew we would face many obstacles. Most of them have been overcome by the research and effort we have put in to ensure the project's success.

One major obstacle while developing with Unity was its limitations on what could be done with the game objects we created as a base for the Google Street View panoramic images. Since Unity focuses on 3D, it was difficult to stitch the images retrieved from Google Street View as textures and display them on the game objects. After days of research we realized this was one of Unity's major limitations, so we looked for development environments without such restrictions. We came across Processing, a programming language and development environment with very few limitations that performs as well as Unity for the tasks we need it to perform.

With Processing we could no longer use the SDK provided by the Oculus Rift development team, so we had to find our own way to display images on the Oculus Rift and retrieve its sensor values. Many hours of research led us to discover that, for an image to be displayed correctly on the Oculus Rift, we need a shader that distorts the image into left and right versions for the left and right eye respectively. The shader provided by "ixd-hof" [17] helped guide us towards successfully displaying images on the Oculus Rift.
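For illustration, the kind of distortion such a shader applies can be modelled as a radial (barrel) polynomial in the squared distance from the lens centre, as in the Oculus SDK's distortion model. Java is used here for the math only; the coefficient values are examples, not the device's calibrated parameters.

```java
// Sketch of the radial distortion applied per eye before the image is
// viewed through the Rift's lenses. Each output pixel is mapped back to
// the source coordinate it should sample, as a fragment shader would do.
public class BarrelDistortion {
    // Scale = k0 + k1*r^2 + k2*r^4 + k3*r^6 (example coefficients).
    static final double[] K = { 1.0, 0.22, 0.24, 0.0 };

    static double distortionScale(double rSq) {
        return K[0] + rSq * (K[1] + rSq * (K[2] + rSq * K[3]));
    }

    // (x, y) are lens-centred coordinates of an output pixel; the result
    // is the source texture coordinate to sample.
    static double[] warp(double x, double y) {
        double scale = distortionScale(x * x + y * y);
        return new double[] { x * scale, y * scale };
    }
}
```

Points farther from the lens centre are pushed outward more strongly, which pre-compensates for the pincushion distortion the lenses introduce.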

The next major obstacle was that we no longer had game objects to serve as the base for the images, so we had to create our own way of turning the retrieved images into panoramas. To solve this, we wrote our own code to stitch images and used a scene as the base so that the images could be combined into a panoramic image.

While working with the Java wrapper called "JRift" [18], we faced an obstacle regarding the compilation of the Oculus SDK against the Java Native Interface (JNI) [19]. The problem was due to the differences between Java on 32-bit and 64-bit Windows platforms. It was resolved by compiling on a Mac OS X platform.

The greatest obstacle we faced when creating this system was finding methods to retrieve Google Street View's data quickly and efficiently, so that the user can walk around freely without delays caused by internet data transfer. The other main obstacle was discovering the most natural head movements to trigger horizontal and vertical movement within the virtual reality.

During development we found various other restrictions and difficulties. To keep such problems to a minimum, we performed code reviews at milestones and adjusted our code and designs accordingly.


    7.2. Tools and Techniques

    7.2.1. Tools

    Software 

Unity – Game Engine: Unity is a game development engine with a built-in Integrated Development Environment (IDE). It has tools to support the rendering of three-dimensional (3D) graphics on multiple platforms.

Processing – Programming Language and Development Environment: Processing is a powerful development tool with an Integrated Development Environment. It is built on the Java language and supports 3D rendering through OpenGL.

    Software Development Kit 

Oculus Rift SDK: The Oculus Rift SDK [20] is a development kit that provides support to the Unity Engine and the Unreal Engine for development purposes. It is also available for development in other environments.

    Application Programming Interface 

Google Street View API: Google Street View is a panoramic view of locations integrated into Google Maps. The Google Street View API [21] provides support to fetch the appropriate image of a certain location, specified by its coordinates in latitude and longitude.

Google Geocoding API: Google Geocoding [22] is an API which allows us to convert human-understandable addresses (like "London Eye, UK") into geographic coordinates.

    Languages 

C#: C# (pronounced "see-sharp") [23] is an object-oriented programming language created by Microsoft. Its compatibility with the Microsoft .NET software framework has pushed C# to become a globally popular programming language. The Unity Engine supports C# scripts.

JavaScript: JavaScript [24] is a programming language that is generally used as part of web browsers. It is a client-side scripting language used to alter the displayed information. It is also supported by the Unity Engine.

    Processing  –   Java based language: The Processing Language was designed to facilitate the

    creation of sophisticated visual structures.

    Libraries 

JRift by 38leinaD – "JRift" is a Java Native Interface (JNI) wrapper, which acts as the middleman between the Oculus SDK, which is written in C++, and the Java programming language.

BezierSQLib by Florian Jenett – The "BezierSQLib" [25] library is a Processing library which acts as a JDBC driver wrapper, allowing access to MySQL, SQLite [26] and PostgreSQL databases. We use SQLite for our application's database, as it is a local database.

GifAnimation by Patrick Meister – The "GifAnimation" [27] library is a Processing library which provides rendering of Graphics Interchange Format (GIF) images for animation purposes.


    7.2.2. Techniques

Image Caching: Image caching is done by temporarily saving the images on the local drive. When a searched location already has its images cached on the local drive, the images are retrieved from there. This is done so that the images need not be downloaded each time the user visits the place. The images are deleted once the session is destroyed, i.e., once the user closes the application. They may also be deleted when they exceed the set storage limit. We have also planned to save the images of frequently visited places so that the system can load them as quickly as possible. This method saves both the time and the effort required by the system to load the images from Street View.
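The caching behaviour described above can be sketched as a small cache-aside helper. This is an illustrative sketch, not the project's actual code; the class and method names are our own, and the download step is injected so the example stays independent of the Street View client.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Locale;
import java.util.function.Supplier;
import java.util.stream.Stream;

// Sketch of the report's image-caching idea: tiles are keyed by location and
// heading, written to a local cache directory on first fetch, and served from
// disk thereafter. The fetcher is injected so the cache logic stays separate
// from the actual download code.
public class TileCache {
    private final Path cacheDir;

    public TileCache(Path cacheDir) throws IOException {
        this.cacheDir = Files.createDirectories(cacheDir);
    }

    // Filename encodes the request parameters, e.g. "40.7580_-73.9855_h090.jpg".
    static String keyFor(double lat, double lng, int heading) {
        return String.format(Locale.ROOT, "%.4f_%.4f_h%03d.jpg", lat, lng, heading);
    }

    // Return cached bytes if present; otherwise fetch once, store, and return.
    public byte[] get(double lat, double lng, int heading,
                      Supplier<byte[]> fetcher) throws IOException {
        Path file = cacheDir.resolve(keyFor(lat, lng, heading));
        if (Files.exists(file)) {
            return Files.readAllBytes(file);   // cache hit: no network needed
        }
        byte[] data = fetcher.get();           // cache miss: download once
        Files.write(file, data);
        return data;
    }

    // Delete everything, mirroring "deleted once the session is destroyed".
    public void clear() throws IOException {
        try (Stream<Path> files = Files.list(cacheDir)) {
            for (Path p : (Iterable<Path>) files::iterator) Files.delete(p);
        }
    }

    public static void main(String[] args) throws IOException {
        TileCache cache = new TileCache(Files.createTempDirectory("svtiles"));
        byte[] tile = cache.get(40.758, -73.9855, 90, () -> new byte[]{42});
        System.out.println(tile.length + " byte(s), key " + keyFor(40.758, -73.9855, 90));
        cache.clear();
    }
}
```

A frequently-visited-places cache would simply skip `clear()` for selected keys.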

Predictive Caching: Predictive caching is an algorithm the system runs while it is idle and the user is visiting a searched location. The algorithm preloads images into a cache depending on the user's current heading, so the system can display the images immediately when forward movement is triggered.
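A minimal sketch of the prefetch-selection step. The 90-degree tile step and the neighbour-first ordering are assumptions for illustration, not taken from the project code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative predictive-caching helper: while idle, prefetch the tiles
// most likely to be needed next -- the tile facing the user's current
// heading first, then its neighbours to either side.
public class Prefetcher {
    // Tiles are assumed to be fetched in fixed 90-degree angular steps.
    static final int STEP = 90;

    // Headings to prefetch, ordered by likelihood, normalised to [0, 360).
    public static List<Integer> headingsToPrefetch(int currentHeading) {
        List<Integer> order = new ArrayList<>();
        int base = Math.floorMod(currentHeading, 360);
        order.add(base);                              // straight ahead
        order.add(Math.floorMod(base - STEP, 360));   // left neighbour
        order.add(Math.floorMod(base + STEP, 360));   // right neighbour
        return order;
    }

    public static void main(String[] args) {
        System.out.println(headingsToPrefetch(90)); // prints [90, 0, 180]
    }
}
```

The returned headings would be fed to the tile downloader for the position one step ahead of the user.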

User Experience Survey:

●  For the "branches" stated in the Methodology section, we shall conduct a series of surveys and evaluate the results to create reports of the best/most natural methods.

●  The eye distance, or inter-pupillary distance (IPD), set initially by the system may be determined by the surveys conducted.

●  The head gestures, such as nodding or shaking the head, that trigger forward movement may also be determined by the surveys. The findings from this survey will contribute to research on "natural movements and their relationship with computer inputs".


    7.3. Technical Specifications

    7.3.1. Oculus Rift

The Oculus Rift development center provides an Oculus Development Kit which includes the Oculus SDK, official Unreal and Unity engine integrations, samples, documentation, videos, a developer wiki, and community forums for building a virtual reality environment.

    The version of Oculus SDK that we will be using is Oculus SDK 0.2.5.

    Figure 9: Oculus Rift Development Kit

    The Oculus Rift’s Development Kit technical specifications are as follows:

    ●  Head Tracking: 6 degrees of freedom (DOF) ultra-low latency.

    ●  Field of View: 110 degrees diagonal / 90 degrees horizontal

    ●  Resolution: 1280x800 (640x800 per eye)

    ● Inputs: DVI/HDMI and USB

    ●  Platforms: PC and mobile

    ●  Weight: ~0.22 kilograms


    7.3.2. Google Street View

    Google provides an Application Programming Interface (API) to developers for integrating it

    with any system that they build. This integration can be done by using an API key [28]

     provided to a developer along with other parameters to get the image of a certain location.

    The API is available at “http://maps.googleapis.com/maps/api/streetview?parameters”. 

    The parameters used in the API to get the images are as follows:

    Required Parameters:

    ●  size, this must be specified in the format of {width}x{height}. E.g. 400x400.

    ●  location, this can either be in the form of a text string (such as Chagrin Falls, OH) or a

    latitude/longitude value (40.457375,-80.009353).

●  sensor, this parameter indicates whether the request came from a location-sensing device or not. For our use we shall use the value false.

    Optional Parameters:

●  heading, this indicates the compass heading of the camera. Values range from 0 to 360, with 0 and 360 as North, 90 indicating East, and 180 South.

    ●  fov, this indicates the field of view. The field of view is how wide of an angle the

    image is. The default is 90, the maximum is 120. We will most likely be using 90 and

    110, but experiments will be made for the optimized size.

    ●   pitch, this specifies the vertical angle of the camera. The default is 0. Positive values

    angle the camera up whilst negative values angle down. 90 being straight up and -90 

     being straight down.

●  key, this identifies your application for quota purposes and enables reports in the APIs Console.
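Putting the parameters above together, a request URL can be assembled as in the following sketch. The class name and the placeholder key are ours; the parameter names are those listed above:

```java
import java.util.Locale;

// Sketch of assembling a Street View Image API request from the parameters
// described in this section. The key value passed in is a placeholder.
public class StreetViewRequest {
    static final String BASE = "http://maps.googleapis.com/maps/api/streetview";

    public static String url(int width, int height, double lat, double lng,
                             int heading, int fov, int pitch, String apiKey) {
        return String.format(Locale.ROOT,
            "%s?size=%dx%d&location=%f,%f&heading=%d&fov=%d&pitch=%d&sensor=false&key=%s",
            BASE, width, height, lat, lng, heading, fov, pitch, apiKey);
    }

    public static void main(String[] args) {
        // Example request for the coordinates used in this section.
        System.out.println(url(400, 400, 40.457375, -80.009353, 90, 90, 0, "YOUR_KEY"));
    }
}
```

The resulting URL can then be handed to any image loader (in Processing, `loadImage()` accepts a URL directly).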

    7.3.3. Google Geocoding

Google provides the Geocoding API, which allows us to convert human-understandable addresses (like "London Eye, UK") into geographic coordinates. The API takes the form of an HTTP request, in the following format:

"http://maps.googleapis.com/maps/api/geocode/{output}?{parameters}".

The output formats of the response by geocode are as follows:

●  json [29]

●  xml [30]

The parameters used in the API to get the coordinates are as follows:

Required Parameters:

●  address, the address that you want to geocode.

●  sensor, this parameter indicates whether the request came from a location-sensing device or not. For our use we shall use the value false.

Optional Parameters:

●  bounds, this indicates the area within which the geocode results are more prominent.

●  key, this identifies your application for quota purposes and enables reports in the APIs Console.

●  language, this indicates the language of the result.

●  region, the region code, specified as a two-character value.

●  components, this contains filters for restricting the results returned by the geocode.
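A sketch of consuming the response. The JSON sample below is hand-written to mirror the documented response shape (it is not captured API output), and a real client should prefer a proper JSON parser over the regex used here for brevity:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch: pull the first result's lat/lng out of a Geocoding
// JSON response. A production client should use a real JSON parser.
public class GeocodeParser {
    private static final Pattern LOCATION = Pattern.compile(
        "\"location\"\\s*:\\s*\\{\\s*\"lat\"\\s*:\\s*(-?[0-9.]+)\\s*,\\s*\"lng\"\\s*:\\s*(-?[0-9.]+)");

    // Returns {lat, lng} of the first result, or null if none was found.
    public static double[] firstLocation(String json) {
        Matcher m = LOCATION.matcher(json);
        if (!m.find()) return null;
        return new double[]{Double.parseDouble(m.group(1)),
                            Double.parseDouble(m.group(2))};
    }

    public static void main(String[] args) {
        // Hand-written sample mimicking the response shape for "London Eye, UK".
        String sample = "{\"results\":[{\"geometry\":{\"location\":"
            + "{\"lat\":51.5033,\"lng\":-0.1195}}}],\"status\":\"OK\"}";
        double[] loc = firstLocation(sample);
        System.out.println(loc[0] + ", " + loc[1]);
    }
}
```

The extracted pair is what our system passes on as the `location` parameter of the Street View request.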


    8. Project Schedule

Task  Description  Person  Duration  Deadline  Status
1  Research on working with Oculus Rift, Unity, and Google Street View  DD, TC  2m  1 Oct 13  100%
2  Study the programming languages C# and JavaScript  DD, TC  2w  15 Sep 13  100%
3  Learn how to use Unity  DD, TC  1m  25 Sep 13  100%
4  Learn about the usage of Google Street View API  DD, TC  1w  30 Oct 13  100%
5  Use Unity Engine to load images from Google's Street View  DD, TC  1w  1 Oct 13  100%
6  Prepare slides for proposal presentation  DD, TC  2d  13 Oct 13  100%
7  Complete the final proposal  DD, TC  3w  13 Oct 13  100%
8  Understand the workings of Oculus Rift  DD, TC  1m  30 Nov 13  100%
9  Use Unity Engine (changed to Processing) to display images on Oculus Rift  DD, TC  2w  15 Dec 13  100%
10  Use Unity Engine (changed to Processing) to configure the movements of the Oculus Rift  DD, TC  1m  10 Feb 14  100%
11  Create an overlay that allows the dynamic layers to be present at certain locations  DD, TC  1m  20 Feb 14  100%
12  Add other functionalities to the system  DD, TC  1w  22 Feb 14  100%
13  Testing  DD, TC  5m  9 Mar 14  100%
14  Improve movement, image and animation loading times  DD, TC  2w  9 Mar 14  100%
15  Complete the report  DD, TC  3w  9 Mar 14  100%


    9. Project Progress

    9.1. Research and Understanding

Since the start of this project's development we have researched some of the tools used in it. We have researched Unity and the Google Street View API, along with programming languages such as C# and JavaScript.

We have gained knowledge of the basics of the Unity engine, which allows us to integrate programming scripts with Game Objects. We created multiple scenes with different properties and objects to test the functionality of the scripts on different objects under both similar and different circumstances. In one scene, we tested the supplied camera control script to navigate in a two-dimensional plane. In another, we created our own control script for the same game object. These were some of the scenes we created to understand the basics of Unity (See Figures 10, 11, 12, 13, 14, and 15).

    Figure 10: Panning of image in 2D development branch - 1


    Figure 11: Panning of image in 2D development branch - 2

    Figure 12: Panning of image in 2D development branch - 3


    Figure 13: “lookAt” in spherical development branch –  1

    Figure 14: “lookAt” in spherical development branch - 2


    Figure 15: “lookAt” in spherical development branch –  3

In order to script in Unity we had to learn the programming languages that Unity supports: C# and JavaScript. Since we already knew languages such as C and Java, we were able to understand the structure of C# and JavaScript.

We have also learned the usage of the Google Street View API and gained the basic knowledge required to retrieve the images. The Google Street View API takes the form of a web URL with parameters specified by us to retrieve the image of a certain location (See Figure 16). The parameters include the location, size, field of view, and API key. The API key is one of the main parameters required for development; it enables reports in the API console and increases the download quota.

    Figure 16: Load Street View image script “loadImage” 


Apart from the research and understanding of tools, we have implemented loading Google Street View images onto a Game Object. As mentioned in the methodology, we are implementing two branches for displaying images, using a 2D canvas and a sphere as the Game Object. We have compared these two methods of displaying images while using the control script that we wrote to look around, but we have yet to determine the best way to display the images.

    Figure 17: Look control for sphere script “lookSphere” 


    9.2. Obstacles and Solutions

After working on implementing the system using Unity, we came to a halt due to limitations of the Unity engine. Unity did not allow us to apply certain image/texture manipulations: we could not stitch multiple images together to create a new, combined image. After hours of research, we decided to migrate our project to Processing.

    Figure 18: Processing editor


We discovered that in order to re-create our project in Processing, we would need a shader to replicate the "barrel" shader used with the Oculus Rift SDK. We found the shader online, along with an example, and modified the example code to fit our needs: to display a panoramic image created by stitching multiple resized Google Street View images.

    Figure 19: Oculus Rift Shader by “ixd-hof” 


We created a function that loads all the necessary images for an inputted location (later to be generated from the search functionality). We applied the function to the existing code and added a simple way to test looking around. The following is the result:

    Figure 20: Implementation of system using barrel shader

    Figure 21: Prototype implementation of the system


    9.3. Completion Steps

The remaining parts of the project were completed one by one, following the schedule and prioritizing the parts required for the main functionality, as well as other useful features such as suggestive search and eye-distance calibration.

We created search functionality that allows the user to search for the location they wish to visit on this virtual tour. The user can enter either a human-readable address (like "London Eye, UK") or geographical coordinates consisting of latitude and longitude (like "40.720032, -73.988354"). The search then returns suggestions for the nearest location that has Street View images, as well as suggestions for multiple places with the same name. The user may then select their desired location and begin the tour.

Earlier in this report, we mentioned that we would possibly need a shader to display the images for the left and right eyes. We created our own shader instead of using the existing one, to ease the implementation of the remaining parts of the project.

    Figure 22: Applying shader to display the image on left and right eye –  1


    Figure 23: Applying shader to display the image on left and right eye –  2

A necessary step towards completing this project was implementing a method to read sensor values from the Oculus Rift. This can be done using the Oculus Software Development Kit (Oculus SDK) made available by the Oculus Rift team, but since the SDK is written in C++, we needed a Java wrapper to expose the native library to Java. The Java wrapper "JRift", made available by "38leinaD", made it possible for us to easily get the sensor values from the Oculus Rift.

Since we could now retrieve Euler values from the Oculus Rift, we were able to display the images on the Oculus Rift according to its orientation. The main problem which then arose was calibrating the eye distance that would feel most natural to the user, so we created functionality that allows the user to calibrate the eye distance.

    Figure 24: Three rotational degrees of freedom of human head


    10. Technical Description

    10.1. Overview

We started the development of this project by following the methodology, which outlined our process. We focused on understanding the basics of the required applications and programming languages, and then moved on to the implementation of our project.

    10.2. Implementation

We began the implementation of our project in Unity but had to migrate the project to Processing due to the limitations of Unity.

    10.2.1. Unity

Unity, being a powerful 3D engine with a user-friendly interface, helped us get off to a quick start.

After thorough research into Unity and testing of the individual parts needed to start the project, we created a Unity project with two main scenes: one for the flat panoramic branch and the other for the spherical branch.

    Figure 26: Flow of Implementation


With this, the texture was displayed on the outside of the sphere. The final step was to display it on the inside of the sphere. During our research we came across some lines of code that manipulate the triangle matrix of the sphere so that the main texture is displayed on its inside, and so we succeeded in displaying the image inside the sphere, thanks to "BPPHarv" [31].

The next step in our development was to integrate this with the Oculus Rift SDK and build the project to run with the Oculus Rift device. The Oculus Rift SDK provides the OVRCameraController and OVRPlayerController prefabs. By using the OVRCameraController prefab we could use the Oculus Rift device to control the camera movement inside the sphere.

    Figure 29: OVRCameraController prefab


    10.2.1.2. Flat Panorama Branch

The second branch of implementation in Unity is the flat panoramic display of images using a "Cube" GameObject. Although a cube is used as the game object, we set one dimension of the cube to 0 and the remaining dimensions to 400 to make it effectively 2D.

The camera is at a distance from the cube, facing it, and is controlled by a separate camera controller script as shown in Figure 30. In this case the camera is translated instead.

    Figure 30: Panning Camera Controller

The Street View image retrieval process is the same as in the spherical branch, and thus uses the same script to load the image and assign it to the main texture of the current GameObject, the "Cube". In this case a different camera configuration script was needed to make the OVRCameraController move across the panoramic image displayed on the cube.

At this point we realized that the limitations of the Unity engine were stopping us from moving forward with the panning as well as the stitching of the images to be displayed on the main texture. After long hours of research and trial and error, we decided to try another development environment that we had come across since the start of the project. We decided that it would be best to migrate to Processing.


    10.2.2. Processing

Processing is a Java-based programming language and development environment. The implementation process had to be completely redefined for Processing, since there is no SDK provided by the Oculus Rift development team for it. The main process for loading and displaying the images remains the same, but in this case we concluded that the flat panoramic branch of development was the best choice, because Processing's support for two-dimensional graphics is quite efficient.

    10.2.2.1. Retrieve and display images from Street View API

We load the images from Google using the Street View API. The images are retrieved from a URL by passing GET values. The key values to be passed are the "Location" (either latitude/longitude values or the location as a string, e.g. Manhattan, NY), the "Heading", and the "Pitch". The images retrieved from Street View are stored to allow local loading thereafter; this acts as a permanent cache for almost instant loading. The script written in Processing to get the images and store them is shown in Figure 31.

    Figure 31: Load Image in Processing


    10.2.2.2. Keyboard input look-around

    The script for looking around is implemented for the arrow keys of a keyboard. This is

    temporary. The next implementation depends on acquiring sensor values from the Oculus

    Rift. These values will then be translated into pitch and heading values. The 3rd Euler value

    from the Oculus Rift, “Roll”, will then be implemented by translating the value into 2D

    rotation.

    Figure 32: Look around in Processing
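The look-around state this script maintains can be sketched as follows. The 5-degree step per key press is an assumed value for illustration, and the pitch clamp matches the [-90, 90] range of the Street View API's pitch parameter:

```java
// Illustrative look-around state: arrow keys adjust heading and pitch.
// Heading wraps at 360 degrees; pitch is clamped to [-90, 90]. Replacing
// these keyboard deltas with Oculus Rift sensor values is then a matter of
// writing the sensor's yaw/pitch into the same two fields.
public class LookState {
    public double heading = 0;  // degrees: 0 = North, 90 = East
    public double pitch = 0;    // degrees: 0 = level, +90 = straight up

    static final double STEP = 5.0; // assumed degrees per key press

    public void key(char arrow) {   // 'L'eft, 'R'ight, 'U'p, 'D'own
        switch (arrow) {
            case 'L': heading = (heading - STEP + 360) % 360; break;
            case 'R': heading = (heading + STEP) % 360; break;
            case 'U': pitch = Math.min(90, pitch + STEP); break;
            case 'D': pitch = Math.max(-90, pitch - STEP); break;
        }
    }

    public static void main(String[] args) {
        LookState s = new LookState();
        s.key('L');
        System.out.println(s.heading + ", " + s.pitch); // prints 355.0, 0.0
    }
}
```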

    10.2.2.3. Create location search functionality

The location search functionality uses the Geocoding API to convert human-readable addresses, which the user inputs using a keyboard, into geographical coordinates. These coordinates are then passed into the Street View API request to act as the user's starting position. The current implementation can change the location from a search query, which can be either latitude/longitude, e.g. "40.5, 160.675", or a location name, e.g. "Big Ben, London" or "Manhattan, NY". The function retrieves the geographical coordinates from the geocode and then displays two possible location suggestions for the user to select.


    10.2.2.4. HMD Calibration

The Oculus Rift provides HMD calibration, which is visually explained in Figure 33. The concept of HMD calibration is to produce a single perceived image from two images. These images should have overlapping areas which, if correctly proportioned, create the illusion of natural human vision: two eyes, yet one wide picture.

    Figure 33: HMD Calibration

    10.2.2.5. Shader

    The implementation of the shader for the Oculus Rift is based on the calculations in the

    following snippets of code.

    Figure 34: Oculus Rift Shader for left eye
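The core of such a barrel shader is a radial rescaling of each fragment's distance from the lens centre. The sketch below shows that polynomial; the coefficients are the commonly circulated DK1 defaults and should be treated as illustrative rather than as the values used in our shader:

```java
// Illustrative barrel-distortion math: a fragment at normalised radius r
// from the lens centre samples the source texture at radius r * scale(r).
// Coefficients are assumed (widely circulated DK1 defaults), not ours.
public class BarrelDistortion {
    static final double K0 = 1.0, K1 = 0.22, K2 = 0.24, K3 = 0.0;

    // Polynomial scale factor applied to the radius r.
    public static double scale(double r) {
        double r2 = r * r;
        return K0 + K1 * r2 + K2 * r2 * r2 + K3 * r2 * r2 * r2;
    }

    // Distorted sampling radius for a fragment at radius r.
    public static double distort(double r) {
        return r * scale(r);
    }

    public static void main(String[] args) {
        // At the centre there is no distortion; it grows toward the edges.
        System.out.println(distort(0.0) + " " + distort(0.5) + " " + distort(1.0));
    }
}
```

The same polynomial is evaluated per fragment in the GLSL shader, once for each eye with its own lens centre.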


    10.2.2.7. Calibrate eye distance or inter-pupillary distance (IPD)

With the look-around functionality and the shader implemented, the images displayed on the Oculus Rift were now viewable, with the user's head movement driving the view of the image retrieved from Street View. The images are adjusted for the left eye and right eye so that they appear as one image when viewed with the Oculus Rift. To provide access to adjust the eye distance, or inter-pupillary distance (IPD), which is a major factor in displaying the images with the Oculus Rift shader, we created a calibration page under the settings menu. This allows users to adjust the images displayed on the screen of the Oculus Rift for the best experience.

    10.2.2.8. Movement trigger and its functionality

The movement trigger and its functionality are implemented using the change in pitch values retrieved from the Oculus Rift; the gesture recognized is a nod. Once triggered, it moves the user's location forward, i.e., recalculates the next geographical coordinates that the user's position must be shifted to in the direction of the current heading. This movement depends on the heading and the distance to travel in meters, which are converted to degrees of latitude and longitude. The latitude and longitude of a location 10 meters ahead are determined by the following equations:

lat2 = asin( sin(lat1) · cos(d/R) + cos(lat1) · sin(d/R) · cos(θ) )

lng2 = lng1 + atan2( sin(θ) · sin(d/R) · cos(lat1), cos(d/R) − sin(lat1) · sin(lat2) )

where d = 10 m is the step distance, R is the Earth's radius, θ is the current heading, and the latitudes and longitudes are in radians.
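In code, the step above is the standard great-circle destination-point calculation; this sketch uses the mean Earth radius and returns the new coordinates in degrees:

```java
// Destination-point step: move `distance` metres from (lat, lng) along
// compass `heading` (degrees, 0 = North) on a spherical Earth.
public class Move {
    static final double R = 6371000.0; // mean Earth radius in metres

    // Returns {lat2, lng2} in degrees.
    public static double[] step(double lat, double lng, double heading, double distance) {
        double phi1 = Math.toRadians(lat);
        double lambda1 = Math.toRadians(lng);
        double theta = Math.toRadians(heading);
        double delta = distance / R;       // angular distance travelled

        double phi2 = Math.asin(Math.sin(phi1) * Math.cos(delta)
                + Math.cos(phi1) * Math.sin(delta) * Math.cos(theta));
        double lambda2 = lambda1 + Math.atan2(
                Math.sin(theta) * Math.sin(delta) * Math.cos(phi1),
                Math.cos(delta) - Math.sin(phi1) * Math.sin(phi2));
        return new double[]{Math.toDegrees(phi2), Math.toDegrees(lambda2)};
    }

    public static void main(String[] args) {
        // 10 m due north from the origin: latitude rises by roughly 9e-5 degrees.
        double[] next = step(0, 0, 0, 10);
        System.out.println(next[0] + ", " + next[1]);
    }
}
```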

    10.2.2.9. Overlays

The additional information overlays are implemented by retrieving the overlay's file path, location, and heading from a local SQLite database accessed through the "BezierSQLib" library. The animated images are then loaded from local storage via the retrieved path. These GIF animations, rendered using the "GifAnimation" library, are displayed on top of the Street View images.
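The overlay lookup can be modelled as below. The class shape, the heading tolerance, and the wrap-aware angle comparison are illustrative assumptions, not the project's actual schema:

```java
// Illustrative model of an overlay row: a GIF path, a location, and the
// heading at which it should appear. The overlay is shown when the user's
// heading is within a tolerance of the overlay's heading, with the angular
// difference computed wrap-aware across the 0/360 boundary.
public class Overlay {
    final String gifPath;
    final double lat, lng;
    final double heading;           // degrees at which the overlay faces the user
    static final double TOLERANCE = 20.0; // assumed visibility half-angle

    public Overlay(String gifPath, double lat, double lng, double heading) {
        this.gifPath = gifPath;
        this.lat = lat;
        this.lng = lng;
        this.heading = heading;
    }

    // Smallest angular difference between two headings, wrap-aware.
    static double angleDiff(double a, double b) {
        double d = Math.abs(a - b) % 360;
        return d > 180 ? 360 - d : d;
    }

    public boolean visibleAt(double userHeading) {
        return angleDiff(heading, userHeading) <= TOLERANCE;
    }

    public static void main(String[] args) {
        Overlay o = new Overlay("fountain.gif", 40.758, -73.9855, 350);
        System.out.println(o.visibleAt(5) + " " + o.visibleAt(90)); // prints true false
    }
}
```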


    Figure 38: Snippet of code for creating the class Overlay

    Figure 39: Snippet of code for displaying Overlay on top of the Street View on left eye

    Figure 40: Snippet of code for displaying Overlay on top of the Street View on right eye


    Figure 43: Menu

    Figure 44: Search


    Figure 48: Add Overlays

    Figure 49: Credits


    11. References

    1. Street View. The homepage of Google Street View. [Online]

    Available: https://www.google.com/maps/views/home?gl=us&hl=en-us 

    2. Whatis.techtarget (March 2009). Definition of Google Street View. [Online]

    Available: http://whatis.techtarget.com/definition/Google-Street-View 

3. OculusVR. The Oculus Rift by Oculus VR. [Online]

    Available: http://www.oculusvr.com/ 

    4. Processing.  Processing: Open source programming language and development

    environment [Online]

    Available: http://processing.org/ 

    5. Wikipedia (December 2010). Description of Virtual Reality. [Online]

    Available: http://en.wikipedia.org/wiki/Virtual_reality 

    6. Kickstarter. The homepage of Kickstarter website. [Online]

    Available: http://www.kickstarter.com/ 

    7. Kickstarter project: Oculus Rift (September 2012). Oculus Rift: Step into the Game by

    Oculus. [Online]

    Available: http://www.kickstarter.com/projects/1523379957/oculus-rift-step-into-the-game 

    8. Unity3d. Unity: Game engine, tools and multiplatform. [Online]

    Available: http://unity3d.com/unity/ 

9. Leap Motion. [Online]

    Available: http://www.leapmotion.com/

    10. Microsoft Kinect. A device that gives computer eyes, ears, and a brain. [Online]

    Available: http://www.microsoft.com/en-us/kinectforwindows/ 

    11. Roadtovr (March 2013). Article on Google Maps with Oculus Rift and Leap Motion by

    Ben Lang. [Online]

    Available: http://www.roadtovr.com/google-io-2013-first-photos-of-google-maps-with-

    oculus-rift-and-leap-motion/ 

    12. Oculus Street View. The website that displays the Oculus Street View. [Online]

    Available: http://oculusstreetview.eu.pn 

    13. Github. The profile page of troffmo5. [Online]

    Available: https://github.com/troffmo5 

    14. troffmo5/OculusStreetView.  The rift server files and other Oculus Street View files.

    [Online]

    Available: https://github.com/troffmo5/OculusStreetView 

    15. Gizmodo  (June 2013). Article on Oculus Rift and NASA’s Virtual Reality of Mars by

    Eric Limer. [Online]

    Available: http://gizmodo.com/oculus-rift-nasa-s-simple-vr-rig-can-let-you-explore-1042561045 



    16. Virtuix Omni. A treadmill controller for movement in virtual reality. [Online]

    Available: http://www.virtuix.com 

    17. Barrel by ixd-hof. Barrel-distortion shader example for the Oculus Rift. [Online]

    Available: https://github.com/ixd-hof/Processing/blob/master/Examples/Oculus%20Rift/OculusRift_Basic/OculusRift_Basic.pde

    18. JRift by 38leinaD. [Online]

    Available: https://github.com/38leinaD/JRift 

    19. Java Native Interface (JNI). [Online]

    Available: http://docs.oracle.com/javase/6/docs/technotes/guides/jni/ 

    20. Oculus SDK. [Online] Available: https://developer.oculusvr.com/ 

    21. Street View API. [Online]

    Available: https://developers.google.com/maps/documentation/streetview/ 

    22. Geocoding API. [Online]

    Available: https://developers.google.com/maps/documentation/geocoding/ 

    23. C Sharp. [Online]

    Available: http://en.wikipedia.org/wiki/C_Sharp_(programming_language) 

    24. JavaScript. [Online]

    Available: http://en.wikipedia.org/wiki/JavaScript 

    25. BezierSQLib by F. Jenett. A Processing library which acts as a JDBC driver wrapper.

    [Online] Available: http://bezier.de/processing/libs/sql/

    26. SQLite. [Online]

    Available: http://www.sqlite.org/ 

    27. GifAnimation. [Online]

    Available: http://extrapixel.github.io/gif-animation/ 

    28. Google API Key. API key for Google Street View. [Online]

    Available: https://developers.google.com/maps/documentation/streetview/#api_key 

    29. JSON. [Online]

    Available: http://json.org/ 

    30. XML. [Online]

    Available: http://en.wikipedia.org/wiki/XML 

    31. Flip triangles to draw the texture on the inside of a sphere. [Online]

    Available: http://answers.unity3d.com/questions/330025/flip-normals-unity-lightwave-

