Human Studies

This page lists the human studies that have been done.

Research questions

1. How do environmental learning and route-finding ability differ as a function of simulation fidelity? Specifically, is a high fidelity simulation model necessary for promoting efficient indoor navigation, or is a "sparse" model sufficient? To address this question, we are testing four levels of models, each manipulating the amount of environmental information available to the user: high fidelity simulation models (HM), low fidelity simulation models (LM), wireframe models (WM), and sparse models (SM). The four types of models represent a clear progression of decreasing visual granularity, which we call “simulation fidelity”.
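The four model types form an ordered scale of visual granularity, which can be summarized as follows. This is a minimal sketch; the enum name and numeric values are illustrative and not part of the study materials:

```python
from enum import IntEnum

class SimulationFidelity(IntEnum):
    """The four model types, ordered by decreasing visual granularity."""
    HM = 4  # high fidelity simulation model: richly rendered environment
    LM = 3  # low fidelity simulation model
    WM = 2  # wireframe model
    SM = 1  # sparse model: floor plan topology only

# The ordering lets conditions be compared directly by granularity.
```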

2. Which viewing perspective should be used on a real-time display to best support navigation of complex indoor buildings? We are investigating the pros and cons of both First Person and Third Person (bird’s-eye) viewing perspectives.

3. Should the optimal display adopt a heading-up (track-up) or a north-up (fixed) model? Heading-up means that the information displayed on the PDA rotates to stay synchronized with the user's orientation during navigation, whereas north-up means that the information on the PDA always remains in a fixed north-up orientation irrespective of the user's facing direction.
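The difference between the two modes reduces to whether map coordinates are rotated by the user's heading before display. A minimal sketch, assuming a compass-style heading (degrees clockwise from north) and world coordinates with x east and y north; the function name and conventions are our assumptions, not the project's code:

```python
import math

def to_screen(point, user_pos, heading_deg, heading_up=True):
    """Map a world-space point to display coordinates centered on the user.

    heading_up=True rotates the map so the user's facing direction is
    always screen-up; heading_up=False (north-up) keeps the map fixed,
    with north always screen-up.
    """
    dx = point[0] - user_pos[0]
    dy = point[1] - user_pos[1]
    if heading_up:
        theta = math.radians(heading_deg)
        # Rotate so that a point directly ahead lands at (0, +d) on screen.
        rx = dx * math.cos(theta) - dy * math.sin(theta)
        ry = dx * math.sin(theta) + dy * math.cos(theta)
        return (rx, ry)
    return (dx, dy)
```

For example, with a heading of 90° (facing east), a landmark one unit east of the user appears straight ahead (screen-up) in heading-up mode, but to the right in north-up mode.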

Experiment design

These questions motivate the current line of behavioral experiments. The first experiment manipulates the visual granularity of information displayed on a PDA to assess how environmental learning and route finding differ as a function of simulation fidelity. This study is being done in virtual reality, as it allows us to build multiple building environments of significant complexity, easily vary the information content rendered on the simulated PDA, and simplify the logging of user spatial behavior and the collection of test data during experimental sessions. Significant time has been spent making models at four levels of visual granularity, ranging from a high fidelity simulation, which richly renders the details of the environment, to a sparse rendering, which shows only the floor plan topology. Our goal is to find the minimal information display that supports the highest level of spatial learning and navigation performance. These human behavioral data will be important in subsequent experiments for determining what information should be rendered in a spatial display supporting indoor navigation, and will also feed back into the ontology development and modeling work. For more information, please see here (Poster), here (Video), and here (Talk).
