Challenges of Mixed Reality Project Development

As David H. Jonassen wrote in his book Learning to Solve Problems: An Instructional Design Guide [1], the two most critical attributes of a question are how the problem should be defined and what the value is in solving it. The process of understanding these two attributes accompanies the finding of the solution. In some cases, ideas themselves are contingent on this process of finding and solving the problem. Even when the goal appears impossible, a logic is still established through the act of framing and interrogating the problem. During the research and development of our project, new problems kept presenting themselves, often as a direct result of ‘solving’ an earlier one. Problem-solving is therefore a fundamental part of the development process and is the main content of this paper.

Some problem-solving models tend to treat all issues equally, attempting to summarise a common problem-solving process. The culmination of information-processing concepts was an attempt to articulate a uniform theory of problem solving (Smith, 1991) [2]; however, it failed in the end due to its generality. The classic ‘General Problem Solver’ (Newell & Simon, 1972) [3] elaborated on different problem-solving processes, categorising two groups of thought processes involved in problem solving: understanding and searching.

It is widely believed that some people are better at solving problems than others because they use more effective problem-solving strategies. Therefore, it is also important to use the appropriate strategies after finding problems in order to interrogate them effectively. In their article, Singley & Anderson mention that solvers who attempt to use weak strategies, such as general heuristics (which can be applied across domains), generally fare no better than those who do not. However, solvers who use domain-specific methods (stronger strategies) tend to perform much better (Singley & Anderson, 1989) [4].

Other contemporary research and theory in problem-solving claims that problem-solving skills are both domain- and context-specific. Solving problems in a domain relies on cognitive operations that are specific to that domain (Mayer, 1992; Smith, 1991; Sternberg & Frensch, 1991) [5, 2, 6]. Using expertise in one domain to solve problems within that field is often referred to as a ‘strong method’, as opposed to domain-general strategies (weak methods).

As described above, attempting to use a generic strategy to solve all problems is not practical or effective. If different types of problems require specific solutions, it is important to categorise the problem properly in the process of finding a solution.

For example, problems may vary in terms of their structure, complexity, dynamicity, and specificity. Firstly, Jonassen (1997) [1] divided problems into well-structured and ill-structured according to their structuredness, and states that different intellectual skills are required to solve each type. Secondly, problems vary in terms of their complexity – more complex problems involve additional cognitive activities compared with simple ones (Kluwe, 1995) [7]. Thirdly, problems also vary in their stability or dynamicity. When the conditions of the problem change, the solver must continually adjust their understanding of the problem and find a new solution, as the old solution may no longer work. Lastly, and crucially, solutions vary from area to area. To summarise, problems in one field or domain will differ in their structuredness, complexity, and dynamicity from those in another.

Therefore, it is more accurate and effective to find the corresponding solutions according to the characteristics of the problem after comprehensive analysis. This article will record and analyse the different kinds of problems that arose during the development of our project, and additionally explore how different problem-solving strategies can be used to deal with the various technical problems encountered.

2. Project Aims

Our project researches how human perception of space works and how virtual reality could be deployed to hack the senses and make people believe they are moving in a much larger space than they really occupy. We will use Virtual Reality (VR) as a tool to explore spatial perception and test how susceptible our senses are to corruption.

Common VR experiences augment the real and the virtual visually (potentially dislocating the user); however, our project aims to couple the visual with more tactile sensations such as touch to match virtual and physical boundaries as closely as possible, making the virtual experience feel as real as possible.

By misleading the senses through playing with real spatial references (such as length and orientation), our project aims to enable users to perceive a larger virtual world in a more confined ‘real’ space.

3. Research Challenges

3.1 What kind of equipment is used to provide the desired functions?

The task of our project is to subtly construct a virtual space proportionally similar to a real space – only enlarged. However, our primary goal is to create this geometric distortion subliminally, so that users do not notice the warping of the virtual space, thus completing the sensory deception and proving the hackability of our senses. Given the seemingly contradictory nature of these intended distortions, achieving this will be the fundamental problem to solve.

First, we need to identify the desired functions (controllable parameters) we require. We want to make the user feel that the virtual reality they are immersed in is very close to the real world they just left. In that world, the user’s freedom of action is a fundamental factor and thus needs replicating in the virtual environment. In order to do so we need to provide the three functions of head tracking, hand tracking and roaming ability in a specific space.

These functions and their control depend on the experimental hardware, therefore we tested and compared the relevant products on the market to utilise the most appropriate and effective equipment. As head tracking is incorporated as a basic function of most existing consumer equipment, we need not worry too much about specifying this as an additional item. Furthermore, through testing, our research also found that ‘Leap Motion’ possessed a reliable hand-tracking ability and served its purpose effectively. For locomotive tracking such as walking, we tested various systems such as Valve Lighthouse and the Oculus Rift – both of which allow users to move about and explore a virtual environment in a limited physical space. After experimenting with many of the mainstream devices, we chose the HTC Vive.

3.2 How to facilitate virtual exploration via natural locomotion?

The basic conditions of the hardware were met through testing the equipment mentioned previously, but the real challenge lay in the programming and sequencing of experiences in order to meet our aim of sensory deception.

As our goal is to change the perception of a space through more than simply visual feedback, we needed to examine other forms of interaction, including haptic feedback (touching and moving virtual objects) and auditory effects (such as echoes). Using these additional senses could reinforce the sensory distortion and thus make the effect more real.

However, before we could advance onto such multisensory illusions, we focussed on distorting the user’s sense of depth – a fundamental visual cue in navigating and perceiving space. In order to make this distortion effective, users must be able to navigate a space beyond simply reaching out one’s arms or tilting one’s head. Having the freedom to move through a real space will help reinforce the distortion in the virtual.

Put simply, how might a test subject freely experience a larger-scale virtual world in a limited physical space? It is not the first time this question has been raised, so before commencing our own experiments, we undertook some preliminary research on how other people approached and attempted to solve this problem. Some VR developers have designed algorithms that allow users to experience a larger virtual world in a limited space by misleading directions and distances. This method is collectively referred to as ‘redirected walking’. These existing algorithms are roughly divided into two types: Reactive and Predictive.

‘Reactive Algorithms’ are essentially greedy algorithms that make decisions based on the current state of the user at each point and try to make the optimal choice based on a particular heuristic. Steer-To-Centre (S2C) and Steer-To-Orbit (S2O) are examples of such ‘decisions’. These techniques use rotation and curvature gains in a greedy approach to steer the user either towards the centre (as in S2C) or in an orbit about the centre (as in S2O) of the tracked space.
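As a rough illustration of how such a greedy, reactive steering step might look, the sketch below clamps a per-frame rotation injected towards the centre of the tracked space. The gain limit and function names are our own illustrative assumptions, not values taken from the cited papers.

```python
import math

# Illustrative sketch of a Steer-To-Centre (S2C) style update.
# MAX_ROTATION_GAIN is an assumed per-frame limit, not a value
# from the literature.
MAX_ROTATION_GAIN = math.radians(1.5)

def s2c_redirect(user_pos, user_heading, centre=(0.0, 0.0)):
    """Rotation (radians) injected this frame to steer the user's
    virtual heading towards the centre of the tracked space."""
    to_centre = math.atan2(centre[1] - user_pos[1],
                           centre[0] - user_pos[0])
    # Signed smallest angle between the heading and the centre direction
    error = (to_centre - user_heading + math.pi) % (2 * math.pi) - math.pi
    # Greedy choice: inject as much rotation towards the centre as the
    # per-frame gain limit allows
    return max(-MAX_ROTATION_GAIN, min(MAX_ROTATION_GAIN, error))
```

An S2O variant would steer towards the tangent of an orbit around the centre rather than towards the centre itself.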

Eidgenössische Technische Hochschule Zürich (ETHZ) published a paper [8] on redirected walking, which introduced their own reactive algorithm. In order to keep their algorithm working they used a ‘pure reorientation reset technique’. When a reset had to be done, they used a large green arrow to instruct the user to stop walking and turn on the spot in the direction the arrow was pointing. Users are forced to do a full 360° turn in the virtual space. However, this means they need to turn between 390° and 540° in the physical space, depending on the conditions of the route.


Figure 01: Layout of the study VE and a recorded virtual trajectory, 2014

Figure 01 shows the layout of their virtual space and a recorded virtual trajectory. The green point marks the start/terminal position. Users walked clockwise. The red dashed rectangle shows the size of the physical space. The reset technique was used five times, at positions R1 to R5, to complete the journey.

In addition, the University of Southern California has also published a paper [9] describing their own reactive algorithm. They admit that the redirection used in their algorithm is not sufficient to keep users walking in the tracked space all the time, so they set a safety trigger a fixed distance (by default 0.5 metres) inside each side of the tracked area boundary. When a user passes the safety threshold, a reset must be activated to reorient the user back to the safe area. They use the most widely used reset: the ‘2:1-Turn’ proposed by Williams [10]. Users need to perform a 360° rotation on the spot while the virtual rotation is scaled by a factor of 2, resulting in a 180° rotation in the physical space. As a result, if users wish to resume walking they must first stop and follow the reset instruction, redirecting them physically towards the safe zone, back inside the boundaries of the safety triggers.
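The arithmetic of the ‘2:1-Turn’ is simple enough to sketch directly; the function names here are our own:

```python
# Sketch of the '2:1-Turn' reset: virtual rotation is scaled by a factor
# of 2, so a full 360-degree virtual turn needs only a 180-degree
# physical turn, which reorients the user back towards the safe area.
ROTATION_GAIN = 2.0

def virtual_rotation(physical_rotation_deg):
    """Rotation shown in the headset for a physical on-the-spot turn."""
    return physical_rotation_deg * ROTATION_GAIN

def physical_rotation_needed(virtual_rotation_deg=360.0):
    """Physical rotation required to complete a given virtual turn."""
    return virtual_rotation_deg / ROTATION_GAIN
```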

These reset points do not all appear at corners in the virtual space; they are likely to appear as users move along a continuously straight trajectory. The reset markers act as breakpoints in space and, by implying ‘real’ physical boundaries, disrupt a coherent virtual spatial experience.

Predictive Algorithms outperform reactive algorithms, specifically by reducing the frequency of resets. A predictive algorithm is also provided by the University of Southern California in their paper [9] about redirected walking. In their example, they fit a 12×4 metre virtual space into a 3.5×1.2 metre tracked physical space. The drawback of their approach is that it limits the user’s freedom of action to a large extent, because in that space users can only move along the path designed for them.

Through researching several contemporary mature algorithms, we found that in fact the user’s movement still carries a lot of restrictions. Because of the importance of this issue in our project, it had to be resolved, and there was no ready-made solution available. We decided to put the necessary energy into developing and refining a new set of algorithms to achieve a new form of redirected walking and solve this inescapable problem.

3.3 What does our algorithm need to achieve?


Figure 02: Hamster in a ball rolling on the floor, 2012

First, we needed to find a theoretical model that conceptually matches two spaces of different sizes. As can be seen in Figure 02, a hamster walks in a transparent sphere while the sphere rolls on the ground. Much like the spatial constraints in our own project, the transparent sphere’s size remains constant.

In this analogy we conceive the virtual space as the ground upon which the physical barrier travels. Unfortunately, in our experiment it would be too challenging to make the physical room oscillate akin to the hamster sphere. However, if we invert this concept and instead imagine the virtual world spinning, this becomes far more achievable in code. For example, we could analyse this relative motion pattern, calculate an inverse relative motion, and then apply it so the physical space stays static while the virtual space moves in the opposite direction. Each movement in the physical space also relates to a corresponding movement in the virtual world. If convincingly in sync, the visual and auditory information the user accepts comes entirely from the simulated space. Theoretically, therefore, it is possible to achieve our goal of hacking human perception using relative motion.
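At its simplest, the inverse relative motion amounts to answering every physical displacement with an equal and opposite displacement of the virtual world; the minimal sketch below (names our own) shows this core inversion:

```python
# Minimal sketch of the inverted relative-motion idea: instead of the
# room rolling over the virtual ground, every physical displacement is
# answered by an equal and opposite offset of the virtual world, so the
# user's pose relative to the scene changes as if the room had moved.
def world_offset(physical_delta):
    """Offset applied to the virtual world for one physical movement."""
    dx, dy = physical_delta
    return (-dx, -dy)

def apply_step(world_origin, physical_delta):
    """Accumulate the world's counter-movement over one user step."""
    ox, oy = world_origin
    dx, dy = world_offset(physical_delta)
    return (ox + dx, oy + dy)
```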

3.4 How to design our algorithm according to the theoretical model?

The relationship between the two spaces in the theoretical model is similar to that between the physical space and the virtual space in our project. However, the model of relative motion between spaces is the core of the theoretical model; the other parts do not necessarily apply to our project. As such, the theoretical model needs to be translated into mathematical language and optimised for relevance. First, because the floor of the real world is a plane, the theoretical model is simplified from three dimensions to two.


Figure 03

In Figure 03, the left diagram is the simplified plan of the theoretical model. The red line represents the boundary of the virtual space, and the black line represents the boundary of the physical space. Then, considering that the boundary of a physical space is usually a straight line (like a wall), the shapes of the spaces are treated as polygons.

The walls (boundaries) of the physical space are used as the main reference objects because they will actually be touched by the user. In our model, each wall that appears in the virtual scene needs to be the same size as the corresponding wall in the physical space. So in the right diagram of Figure 03, the side length of the square is the same as the side length of the hexagon. At the same time, the centre point of the space is used as a secondary reference object. Other elements, such as the floor and ceiling, do not help the user to identify direction if there is no grain or pattern on them.


Figure 04

In the polar coordinate system, the boundary function of the square or the regular hexagon is periodic (as shown in Figure 04). We subdivided the polygonal space into multiple isosceles triangular units so that the walls (as the main reference) can be maintained at the same scale in both physical and virtual worlds. Because the centre point has no volume, its relative position is its important attribute. Just as in the physical space, the centre point is located on the perpendicular line of each wall in the virtual space; only the distance between the centre point and the wall is longer in the virtual space than in the physical space. As a consequence, any length in this direction needs to be scaled proportionally in the virtual space. Using this approach, we arrived at the first version of the coordinate diagram.
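A minimal sketch of this first-version mapping, under our own assumptions (a 4-metre shared wall length, and square unit i mapping to hexagon unit i), scales only the component perpendicular to the wall while leaving the wall-parallel component untouched, so each wall keeps its physical length:

```python
import math

SIDE = 4.0                           # shared wall length (assumed, metres)
A_SQ = SIDE / 2                      # square's centre-to-wall distance
A_HEX = SIDE * math.sqrt(3) / 2      # hexagon's centre-to-wall distance

def map_point(n, p, sector):
    """Map a point in one triangular unit of the physical square to the
    matching unit of the virtual hexagon.

    n: distance from the centre along the wall normal (0..A_SQ)
    p: offset parallel to the wall, left unscaled so the wall keeps its
       physical length
    sector: index of the triangular unit
    """
    n_v = n * (A_HEX / A_SQ)         # stretch only the wall-normal axis
    angle = sector * math.pi / 3     # orientation of the virtual unit
    # Rotate the (normal, parallel) frame into virtual world coordinates
    return (n_v * math.cos(angle) - p * math.sin(angle),
            n_v * math.sin(angle) + p * math.cos(angle))
```

A point on the physical wall (n = A_SQ) lands exactly on the virtual wall (n_v = A_HEX), while its position along the wall is preserved.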

3.5 How to optimise our algorithm?

The first version of the coordinate system showed little deficiency when performing walking tests within each isosceles triangular unit. As hoped, scaling in the direction perpendicular to the wall proved hard for the user to detect. However, we discovered a bug in the programme when testing walking between adjacent isosceles units. When the user crosses the unit boundary there is a significant and noticeable rotation of the virtual space.

Because of the frequency, regularity, and predictability of this problem (always occurring at specific instances), it was unlikely to be an issue of device stability, but rather a weak point within the algorithm.

We decided to run an investigative simulation to see how the problem actually came about. In the physical space coordinate diagram, we constructed a straight line segment between the units to represent the path that the user walks in physical space. According to the algorithm, we then found the corresponding line segment in the virtual space coordinate diagram.


Figure 05

As shown in Figure 05, the linear path in the physical space is mapped to a broken line in the virtual space. When the user crosses the triangular threshold the virtual space suddenly turns 30 degrees, ruining the effect. Therefore, we had to reoptimise the algorithm.

First, we needed to know what kind of coordinate system would circumvent this issue. If the coordinate diagram could be made like a standard polar coordinate system, the experience might be greatly improved.


Figure 06

As shown in Figure 06, if the coordinate diagram is a standard polar coordinate system, the linear segment in the physical space corresponds to a curve with uniform curvature in the virtual space. In this case, the 30-degree deflection is subdivided into every step of the user’s walk. This method can avoid the sudden rotation of the virtual space, and the experience might be greatly improved. However, such a coordinate system causes the straight segment to be mapped to a curve. As we are discussing this coordinate system from a top view, a wall corresponds to just a straight segment; if it is mapped to a curve, the wall in the virtual space becomes a curved surface.

In that case, the plane in the physical space correlates to a false surface in the virtual space, reminding the user of the existence of the algorithm and the inauthenticity of the virtual space. This goes against our aim of hacking human perception subtly. We therefore needed a system with some of the characteristics of both frame types, and attempted to combine the two coordinate diagrams to form a new coordinate system with the advantages of both.

In mathematics, there is a common way of integrating multiple functions: multiplying each function by a given factor and then summing them. If the function of the polygonal coordinate diagram is g(x) and the function of the circular coordinate diagram is h(x), the new function will be f(x) = k * g(x) + l * h(x). The coefficients k and l can be constant values or variables. If we use the value d to represent the distance between the user and the centre point, and set k to the variable shown in Figure 07 (with l = 1 – k), we can produce a new coordinate diagram as shown in the figure.


Figure 07

By adjusting the k value curve, the new coordinate diagram can preserve the characteristics of the circular diagram in some regions, while in others, it has the characteristics of the polygonal coordinate system. The way that k values are set in the graph, enables the user to move smoothly across the isosceles triangle units, eliminating the shunting reversals encountered previously. When the user is near the wall, the wall in the virtual space still can match the wall in the physical space, alleviating the previous disparity in shape. The k value curve can also be adjusted and calibrated according to the needs of the scene.
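The blending described above can be sketched in a few lines; note that the smoothstep-shaped k curve below is an illustrative stand-in for the tunable curve in Figure 07, not the calibrated curve itself:

```python
# Sketch of blending the polygonal diagram g and the circular diagram h:
# f = k*g + (1-k)*h, with k a function of d, the user's normalised
# distance from the centre (0 at the centre, 1 at the wall). The
# smoothstep curve below is an assumed shape; the real k curve is
# calibrated per scene.
def k_of_d(d):
    """Weight of the polygonal diagram: ~0 near the centre (circular
    behaviour), ~1 near the wall (polygonal behaviour)."""
    d = min(max(d, 0.0), 1.0)
    return d * d * (3.0 - 2.0 * d)

def blended(g_val, h_val, d):
    """f(x) = k*g(x) + (1-k)*h(x) evaluated pointwise."""
    k = k_of_d(d)
    return k * g_val + (1.0 - k) * h_val
```

Near the wall (d close to 1) the blend returns the polygonal value, so the virtual wall still matches the physical one; near the centre it returns the circular value, so crossing a unit boundary produces no sudden rotation.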


Figure 08

As we combine two functions in this way, when k is equal to 0 the difference between the new function and the first-version one is at its largest. As shown in Figure 08,
Δd = f(x) – g(x).
As highlighted previously, when the user crosses this area the virtual space won’t suddenly turn around. However, as the figure shows, each point on the graph is offset from its previous position. All of them move away from the centre point, which means that distances from the centre become larger as a whole.

Yet the dimensions of the virtual space and the physical space have not changed, and the position of the boundary is the same as before. So in the adjusted coordinate diagram, the density of the grid increases towards the outside. The density of the grid affects the distance covered by each step when the user is walking; using a coordinate system with an uneven grid in the algorithm will make users feel that their walking speed is discordant with their virtual progression.

We therefore need to homogenise the grid of the coordinate system. In this case, we just need the lines in the diagram to be curves of smooth curvature, not necessarily circular arcs with a fixed radius. By adding a function i(x) we can achieve the desired effect:
f(x) = k * g(x) + (1-k) * h(i(x)).


Figure 09

The corresponding coordinate diagram and the Δd value curve of this function are shown in Figure 09. We adjusted the grid back towards the state of the previous version as much as possible, while keeping the curves in the grid smooth. The function i(x) needs to be determined from the mean and variance of Δd. On the basis of the previous version, we used the smallest possible change to solve the key problem and created the second version of our algorithm.

3.6 How to strengthen our algorithm?

By this point, our algorithm was able to use a square physical space to give the user a regular hexagonal virtual space. The area of the regular hexagon in the virtual space is about 2.6 times the area of the square in the physical space. But the effect of a single room being magnified was not strong enough: with only a single square space, the physical space and the corresponding virtual space do not have a great impact on the user’s perception. We wanted to increase the difference between the physical and virtual space scales to amplify the effect of the distortion, and therefore needed to further analyse the movement patterns of users in these two spaces.
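The 2.6 figure follows directly from the areas of regular polygons sharing a side length, which can be checked in a few lines:

```python
import math

# Check of the quoted magnification: a regular hexagon and a square with
# the same side length s have areas (3*sqrt(3)/2)*s^2 and s^2.
def regular_polygon_area(n_sides, side):
    """Area of a regular polygon from its side length."""
    return n_sides * side * side / (4.0 * math.tan(math.pi / n_sides))

side = 1.0
ratio = regular_polygon_area(6, side) / regular_polygon_area(4, side)
# ratio is approximately 2.598
```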


Figure 10

As the user walks in the direction perpendicular to the wall, the stride increases slightly. When the user walks in a direction parallel to the wall, their movement is deflected. However, as you can see in Figure 10, the positions of entry and exit are the same in the physical space but different in the virtual space. The slight shifts add up to 120 degrees after the user walks around the space. This angle difference can be used to make the space more misleading to the user.
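The 120-degree figure follows from the unit counts of the two shapes:

```python
# One loop of the physical square crosses its 4 triangular units, but the
# virtual hexagon has 6 units of 60 degrees each, so the user ends two
# units (120 degrees) short of a full virtual revolution.
PHYSICAL_UNITS = 4                            # square
VIRTUAL_UNITS = 6                             # hexagon
unit_angle = 360.0 / VIRTUAL_UNITS            # 60 degrees per virtual unit
drift = 360.0 - PHYSICAL_UNITS * unit_angle   # 120 degrees of accumulated shift
```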


Figure 11

The different directions the user walks in the space will lead them to different exits. As shown in Figure 11, the only exit in the physical space corresponds to three exits in the virtual space. This means that two adjacent square spaces in the physical space can form a binary system that maps to an almost infinite space in the virtual world.

3.7 Can the virtual space in the algorithm be arbitrary?

The binary-system version of the algorithm seemed very close to our expectations, but whether the algorithm that applies to the square and hexagon can be developed into a universal algorithm for spaces of various shapes needed to be validated. Therefore, the adaptability of the algorithm needed to be tested with new virtual spaces.

Considering that in most cases the available physical space is likely to be a quadrilateral, the shape of the physical space in the algorithm will utilise the quadrilateral by default. However, the shape of the virtual space has many possibilities.


Figure 12

In Figure 12, virtual spaces of a pentagon, hexagon, heptagon and octagon are generated using the same algorithm. We maintain that the side lengths of the virtual space shapes are equal to the side length of the physical space, so polygons with more edges will have a bigger area. At the same time, such a space is more distorted than the physical space. This is also influenced by the degree of scaling in the direction perpendicular to the wall and the degree of deflection in the direction parallel to the wall. So, while the spatial distortion might be greater with larger polygons, the disparity between the physical and the virtual may be so great as to disrupt the illusion.

Consider when the shape of the available physical space is no longer a square, but a rectangle with an uncertain length-to-width ratio. Given the alternating edge lengths, the features of a corresponding pentagonal virtual space are likely to be less convincing. Because we want to make the shape of the physical space in the algorithm more free, the hexagon appears to be the most appropriate choice.

4. Future Possibilities

4.1 What is the potential of our algorithm?

After several simulations, tests and much debugging, our algorithm has been gradually refined. Based on the magnification of the virtual space scale, the core reference objects are almost perfectly matched with the real space. Users are given real, live haptic feedback while simultaneously being afforded great freedom of action in an impossible space.

After we completed our own testing, dozens of users were invited to act as test subjects, immersing themselves in our demo scene. We asked each volunteer for feedback immediately after their experience. Most felt the virtual scene was very convincing, forgetting the limits of the physical space quickly after being immersed in the virtual environment.

Upon removing the VR hardware most were shocked and perplexed by how far they had actually travelled. Certainly this algorithm can be further optimized based on the feedback collected from users, but beyond that, it still has the potential to evolve in directions described below.

From a single polygon space the algorithm can be developed to be more adaptive, so that any shape of the physical space can generate a corresponding virtual space.

Furthermore, systems composed of more than two polygons offer the potential to be more misleading than the binary system described here. If a single polygon space in the physical space has more than one exit, it can be exploited virtually to produce different thresholds and openings. All this contributes to making the paths users take more varied and, as such, we believe will make users more likely to forget about the limitations of their physical surroundings.

Lastly, the current 2D coordinate system used in the algorithm can be applied three-dimensionally (spherically). This additional complexity offers scope to render the space even less decipherable and more immersive.

4.2 How to apply our algorithm?

Our algorithm is site-specific and works with a real physical space to hack a user’s perception. Ideally, we would like this principle to be applied to other scenarios; however, we are unable to prefabricate a scene for an infinite number of contexts. Equally, it is not possible to ask every user to manually build their own digital model of the available space. Our long-term plan is to develop an integrated smart application which allows users to customise and apply this algorithm to new environments.

First, this application will make use of space scanning technology, allowing users to scan their sites independently. The point cloud file obtained by the scan will be temporarily stored in the user’s device. Rudimentary scanning technology is already widely available, for example Project Tango (Google). It is likely that similar technologies will become ubiquitous and available on smart phones in the near future, enabling this feature.

After scanning the space, users may determine which parts of the physical space will enter the virtual space according to their needs. The application will use our algorithm to generate the corresponding virtual space based on the shape of the user’s available physical space.

When the corresponding virtual space is generated, the user can choose to use either the mobile device version or the computer version. If they choose to use the computer and headset display method, the accumulated data will need to be uploaded to a cloud-based storage system for convenient access. Using a more powerful computer as opposed to a smart-device handset will deliver more effective and smoother results.

5. Conclusion and Evaluation

As the refined algorithm shows, it has great potential to change the way we perceive space. Although there have been some other significant algorithms before, ours is the only algorithm that combines natural locomotion, haptic feedback and space magnification.

The content of this article is deliberately conceptual in tone: not a detailed description of the algorithm itself, but rather of its behaviour and how we approached, problematised and overcame the technical challenges encountered while developing it. We are keen to share these findings with a wider audience in order to disseminate the process involved and encourage conversations around the topic.

The ability to solve these problems to a large extent reveals the gap between an original concept and its realisation. We applied our technical and domain-specific knowledge to the idea and successfully overcame several problems. In continuing this project, where problems prove more challenging we may need to develop a more appropriate strategy, as highlighted in the introduction, rather than simply applying our pooled knowledge.

This report, in part, serves as an archive of different problem-solving strategies. We hope that the information and techniques described here can be of use to those who share similar challenges and curiosities.


[1]. Jonassen, David H. Learning to solve problems: An instructional design guide. Vol. 6. John Wiley & Sons, 2004.

[2]. Smith, Mike U., ed. Toward a unified theory of problem solving: Views from the content domains. Routledge, 2012.

[3]. Newell, Allen, and Herbert Alexander Simon. Human problem solving. Vol. 104. No. 9. Englewood Cliffs, NJ: Prentice-Hall, 1972.

[4]. Singley, Mark K., and John Robert Anderson. The transfer of cognitive skill. No. 9. Harvard University Press, 1989.

[5]. Mayer, Richard E. Thinking, problem solving, cognition. WH Freeman/Times Books/Henry Holt & Co, 1992.

[6]. Frensch, Peter A., and Robert J. Sternberg. “Skill-related differences in game playing.” Complex problem solving: Principles and mechanisms (1991): 343-381.

[7]. Kluwe, Rainer H. “Single case studies and models of complex problem solving.” Complex problem solving: The European perspective (1995): 269-291.

[8]. Nescher, Thomas, Ying-Yin Huang, and Andreas Kunz. “Planning redirection techniques for optimal free walking experience using model predictive control.” 3D User Interfaces (3DUI), 2014 IEEE Symposium on. IEEE, 2014.

[9]. Azmandian, Mahdi, et al. “The redirected walking toolkit: a unified development platform for exploring large virtual environments.” Everyday Virtual Reality (WEVR), 2016 IEEE 2nd Workshop on. IEEE, 2016.

[10]. Williams, Betsy, et al. “Exploring large virtual environments with an HMD when physical space is limited.” Proceedings of the 4th symposium on Applied perception in graphics and visualization. ACM, 2007.
