Integrating psychology principles to design a multi-sensory cognitive and performative challenge
This research will provide knowledge of the factors involved in designing multi-sensory training games to enhance memory and produce positive learning outcomes. This is achieved by presenting methods, techniques, and case studies that assess the knowledge and skills learners enact to remain engaged in interactive games.
The following review outlines a theoretical framework of approaches and methodologies that have informed my piece, MSTRMND.
MSTRMND is an interactive logic and ear-training game that combines echoic (auditory) and iconic (visual) memory with spatial perception to create a multi-sensory cognitive challenge. The system was designed to mimic natural multi-sensory human interaction by way of psychological principles, machine perception, communication, and interaction techniques. In the following analysis, I will argue the importance of incorporating psychological factors into multi-sensory integration.
We speak, move, gesture, and listen as part of our everyday human discourse, and it is through this discourse that we combine sensory inputs with system outputs, encouraging humans to communicate and comprehend through a variety of different sensorium (Turk 2013). Our perceptions are based on the retrieval of stimuli from any of the five senses, which we call modes or modalities (Turk 2013). These modalities create a communication channel by way of a specified sensorium, such as touch or taste, through which information is transmitted and integrated from human to human and now, human to machine (Turk 2013).
The theoretical foundations presented in this review have helped to inform the design and development of my piece, MSTRMND. This paper will examine how to integrate psychology principles to design a multi-sensory, cognitive, and performative challenge that fits within the built environment.
MSTRMND is loosely based on the 1970s board game Mastermind (fig. 1). The original Mastermind is a code-breaking game for two players (Gierasimczuk 2013). The codemaker creates a secret code made up of different colored pegs, and the codebreaker tries to match the code using logic and deduction (Gierasimczuk 2013). After each move, the codemaker gives feedback clues to their opponent with black and white pegs, to help them decipher the sequence (Gierasimczuk 2013). Mastermind is a logic game that tests complex skills and strategies in “trials of experimentation and evaluation” (Gierasimczuk 2013).
How to Play:
MSTRMND consists of eight different colored light beams (fig. 2) with accompanying synesthetic sounds. The object of the game is to correctly repeat the computer’s randomly generated sequence of sound and light signals, which increases in complexity as the player progresses. MSTRMND is a single- or multi-player logic game that tests complex skills and strategies in trials of experimentation and evaluation. Each color, encoded to a corresponding synesthetic sound, creates cross-sensory associations within the mind. Through this new embodied method, the player’s entire body becomes the controller of the game. This approach to more embodied sound and color associations is intended to enhance memory by incorporating visuo-spatial associations with multi-sensory processing.
MSTRMND begins by sequentially lighting from red to pink with accompanying synesthetic sound cues; play starts when participants hear the auditory instruction, “Level One, Begin”. Players are instructed to repeat the signal they hear, using any part of their body to set off the correct light beam. Once the first color has been guessed correctly, the player advances to level two, where MSTRMND adds one more signal to the existing sequence. MSTRMND continues lengthening the sequence of signals as long as players guess the colors and sounds in the correct order. However, if a player fails to repeat the sequence exactly, MSTRMND responds by saying “Game Over, Play Again?”. But if a player succeeds in beating all eight levels of the game, MSTRMND indicates its defeat by playing a “winning” sound aloud and flashing white throughout the space.
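The game loop described above can be sketched in a few lines. This is an illustrative simulation only, not the installation’s actual code: the color names, the eight-level cap, and the way the player’s response is modelled as a function are all assumptions for the example.

```python
import random

# Hypothetical sketch of the MSTRMND game loop; color names, level count,
# and feedback strings are assumptions, not the installation's implementation.
COLORS = ["red", "orange", "yellow", "green", "blue", "indigo", "violet", "pink"]
MAX_LEVEL = 8

def play(respond, rng=None):
    """Run one game. `respond(cues)` stands in for the player repeating
    the sequence of light/sound cues with their body."""
    rng = rng or random.Random()
    sequence = []
    for level in range(1, MAX_LEVEL + 1):
        sequence.append(rng.choice(COLORS))  # add one new signal per level
        guess = respond(list(sequence))      # player attempts the full sequence
        if guess != sequence:                # any mismatch ends the game
            return "Game Over, Play Again?"
    return "winning"                         # all eight levels beaten

# A player with perfect echoic/iconic memory beats the game:
print(play(lambda cues: cues))  # -> winning
```

Modelling the player as a callback makes the memory limits discussed below easy to simulate: a player who retains only the first few cues, for example, fails at the level where the sequence exceeds their span.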
MSTRMND was first exhibited and tested at Ugly Duck in South London, January 2020. This exhibition provided an experimental setup by which I could observe the diverse ways that people integrate their perceptions to problem solve.
Through extensive research and the analysis of my observations, I have outlined a comprehensive review of how people perform, and experience multi-sensory tasks. My research details how to design for the broadest range of users and contexts by understanding a user’s psychological characteristics (such as, cognitive abilities, motivation, level of experience, task applications, and physical attributes).
The goal for this installation was to create an ear-training device that is widely accessible to any and all who may be interested. As a non-musically trained audiophile, I have forever been plagued by a dichotomy: a love of music, but a minimal understanding of sound. Thus, I wanted to design a visual learning system for an ear-training device, providing average people like me with a new approach to learning sound. I set out to show that developing an ear does not have to be one-dimensional; my ambition was to invent a learning system that could be intuitively understood, by mapping it across multiple sensory modalities. MSTRMND is therefore intended to support a more natural approach to integrated cognition, perception, and the development of multi-sensory integration.
Learning is an inherent ability we as humans possess, but it is not an easy concept to comprehend (Ertmer 1993). The study of how people learn has been defined and adapted by researchers and psychologists alike, for generations. While there still is no agreed upon definition, the unifying question between them remains: “Where does knowledge come from and how do people know?” (Ertmer 1993).
There are two main schools of thought when it comes to how people learn, the first being behavioral theory. Behavioral theory was the first recognized study of learning, emerging in the late 1800s (Petri 1994). This theory suggested that learning begins when a proper response is generated following the presentation of specific environmental stimuli (Ertmer 1993). However, in recent years, psychologists and educators have started to move away from observed behavior, leaning more towards complex cognitive processes, like problem solving (Ertmer 1993).
This brings me to the second theory of how people learn: cognition (Ertmer 1993). Cognitive theory states that individuals can acquire and store new information, which leads to new behavioral characteristics (Petri 1994). Over the past 70 years, theories in cognition have expanded the study of the human mind to look more closely at inner mental activities, such as language, emotion, memory, and motivation (Jovanovic 2011). Cognitive psychology focuses on the organization of new and meaningful information with prior knowledge and memory (Schunk 1991).
Theories in cognition are widely considered more appropriate for explaining complex forms of learning, like reasoning, problem solving, and information processing (Schunk 1991). This is due to the emphasis cognitive theories put on mental structures, compared to behavioral models (Schunk 1991).
Cognitive processes concern the retention of information in an “unusually accessible state”, such as working memory (Cowan 2005). Working memory refers to the systems necessary to retain information while performing complex cognitive tasks (Baddeley 2010).
Information Measurement Theory: Information theory was first defined in 1948 by Claude Shannon in his paper “A mathematical theory of communication” (Laming 2010). Information theory, or “communication theory” as he called it, regarded the measurement of transmitted information through a variety of different communication channels (Laming 2010). Shannon’s theory was first presented in a psychological context in George Miller’s paper, The Magical Number Seven, Plus or Minus Two (Laming 2010).
Correspondingly, Miller presented a theoretical analysis of information measurement that tested the limits of human information capacity (Luce 2003). His research remains one of the most influential articles on memory capacity in psychology today (Cowan 2015).
In The Magical Number Seven, Plus or Minus Two, George Miller proposed ways to measure the amount of information transmitted from one place to another (Miller 1953). “Channel capacity” is the hypothesized upper limit at which stimulus information can be transmitted, from input to output, before errors occur (Miller 1956). Most notably, he discovered that across a variety of different tasks and sensory channels, the channel capacity hovered around seven items [plus or minus two] (Miller 1956).
Miller determined that there were two ways to increase the amount of input information provided: the first tested the rate at which the input information was given, while the second tested an increase in the number of possible alternatives (Miller 1956). We tend to infer that as we increase the amount of input information, the participant will begin to make more and more errors. To test this theory, we can employ tasks of absolute judgement (Miller 1956). Absolute judgement tasks supply participants with one stimulus at a time; the response is to indicate the category to which each stimulus belongs, based on previous training (Cowan 2015). Miller found that in absolute judgement tasks of one-dimensional stimuli, participants can effectively discriminate only about five to nine items, or as he affirms, the Magical Number Seven (Miller 1956).
In 1953, Irwin Pollack applied principles of absolute judgement tasks to describe a new approach to verbal learning (Miller 1956). One experiment he conducted (fig. 3) used tones, asking participants to match them to previously established numerals (Pollack 1953). When a tone was presented, the participant was instructed to respond with the corresponding number based on previous training. Pollack found that people can transmit about 2.5 bits of information, corresponding to roughly six distinguishable tones, before errors occur (Pollack 1953).
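Because channel capacity is expressed in bits, a capacity of C bits corresponds to 2^C perfectly distinguishable alternatives. A quick check of the figures cited here (the 2.8-bit value is an illustrative round number near Miller’s reported range, not a figure from the text):

```python
# Channel capacity C in bits -> 2**C distinguishable categories.
def categories(bits: float) -> float:
    return 2 ** bits

print(round(categories(2.5)))  # Pollack's ~2.5 bits for pitch -> about 6 tones
print(round(categories(2.8)))  # ~2.8 bits -> about 7, Miller's "magical number"
```

This is why a seemingly small gain in bits of transmitted information corresponds to a large gain in the number of stimuli a listener can reliably tell apart.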
That said, there is evidence to suggest that musically trained people can identify upwards of fifty pitches (Miller 1956). During the exhibition at Ugly Duck, I observed this phenomenon with many of the musically trained participants who played the game. Clearly, classically trained ears give people an advantage in ear-training games such as MSTRMND. Notably, musicians seemed the most compelled to beat the game, despite how frustrating it often became (that’s why they call them mental exercises, right?).
Span of Immediate Memory:
In the previous examples discussed, there are limits to information capacity when users are presented with a single stimulus and instructed to identify it immediately thereafter (Miller 1956). We can expand upon these procedures by presenting multiple stimuli in succession and requiring that the participant retain the information until instructed to provide a response (Miller 1956). This process is what Miller calls the span of immediate memory, and it deals with the retention of multiple stimuli. Miller notes that there is a clear operational similarity between the absolute judgement experiments and immediate memory (Miller 1956). Although both processes yield magnitudes of about seven, these studies impose different limitations on our ability to process information (Miller 1956). He affirms that “absolute judgment is limited by the amount of information, while immediate memory is limited to the number of items” (Miller 1956).
Furthermore, theories in cognition state that transfers are how information is effectively stored in memory (Ertmer 1993), whereby accurate learning exists only when a learner can apply such knowledge to a different context (Ertmer 1993). This brings me to recoding. Recoding is where users reorganize given sequences by breaking them into smaller units or groups and reformatting (or transferring) them into different contexts (Miller 1956). This is something we do subconsciously in our everyday lives, as we often rephrase conversations or ideas into our own words to form more meaningful associations (Miller 1956). For example, when recoding is applied to a sequence of unrelated words in an absolute judgement task, the channel capacity increases from about five to around fifteen items once grammar and meaning are applied (Baddeley 2010). Essentially, if we can group sequences of items into smaller groups, our transmitted capacity will increase, especially if we can recode in our own words (Miller 1956).
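Miller’s own binary-to-octal demonstration makes recoding concrete: twelve binary digits sit near the span limit, but regrouped into threes and recoded as octal digits they become just four items, with no information lost. A minimal sketch:

```python
def chunk(items, size):
    """Group a flat sequence into consecutive chunks of `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

bits = "101000100111"                     # 12 items: near the span limit
groups = chunk(bits, 3)                   # -> ["101", "000", "100", "111"]
octal = [str(int(g, 2)) for g in groups]  # recoded: ["5", "0", "4", "7"]
print(octal)                              # four items to hold instead of twelve
```

The digit string here is arbitrary; the point is the ratio: any grouping scheme the learner already knows (octal, words, phrases) compresses the item count by the chunk size.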
Clearly there is a limit to the span of immediate memory and absolute judgement, but we are not strictly bound by this capacity, as we have adapted and learned ways around it (Miller 1956), such as:
- Increasing the number of dimensions by utilizing differing stimuli
- Arranging a sequence into smaller groups by combining multiple absolute judgements together
We as humans naturally perceive the world as a unified source of sensations, but we often forget that these perceptions are the combination of a variety of different unimodal interactions (Turk 2013). We use these senses, in unity with our experiences, to actively explore, understand, and give explanation to our natural world (Turk 2013).
Furthermore, when we integrate unimodalities, we do not get a sum, but rather an integrated whole (Turk 2013). Our natural perception processes concern systems that use either multiple modalities or multiple channels, a.k.a. multimodal integration (Turk 2013).
Contrary to the natural multi-sensory human interactions we experience in our daily lives, HCI (human-computer interaction) has traditionally focused on unimodal communication (Turk 2013). Admittedly, while technically every interaction with a computer is multimodal (such as the sounds and light when the hard drive starts up), I am specifically talking about interactive computing with data input and output (Turk 2013). Nevertheless, now that microcontrollers and sensors are readily available at affordable prices, more in-depth research into multimodal HCI can finally be realized.
Richard Bolt at the MIT Architecture Machine Group (now the MIT Media Lab) is regarded as the first to design a device with these multimodalities in mind (Turk 2013). Bolt and his team built the Media Room, where they exhibited Put That There (fig. 4), a device that integrated gesture and voice inputs from a participant into natural interactions with a spatial wall display (Turk 2013). The importance of this piece was that none of the phrases could be interpreted from the gesture or the speech alone; they were required to be used in tandem (Turk 2013). This compounding of sensory inputs engendered a more natural mode of interaction, from human to machine, and then machine back to human (Turk 2013).
Computer scientist and researcher Leah Reeves has compiled a set of guidelines for designing multimodal human-computer interactive devices (Reeves 2004). Her guidelines detail the importance of designing for the most accessible and broadest range of users and conditions (Turk 2013). The first is that the device should “maximize human cognitive and physical abilities” by utilizing the specific advantages of each modality to reduce a user’s memory load (Reeves 2004). For example, she states that the presentation of visual stimuli should be combined with a user’s input of spatial information for more effective retention (Reeves 2004).
The second principle is one that I would like to highlight, as it refers to the system’s adaptability to the needs of different users and contexts (Reeves 2004). This brings me to a common multimodal myth: just because a system is designed for multimodal use does not mean it will be used as such (Oviatt 1999). On the contrary, each user brings their own abilities, experiences, and perceptions, and applies them accordingly (Oviatt 1999). Studies have shown that users are likely to combine multimodalities and unimodalities to better fit their needs and preferences (Oviatt 1999). While exhibiting MSTRMND this past year, I observed how participants play and what strategies they use, as a basis for my research. The feedback I received from players showed that a variety of different abilities and cues were used to try to beat MSTRMND. One participant explained that he was color blind, but that he was still heavily motivated to beat the game by applying his auditory skills and visuo-spatial understanding. Something similar emerged with many of the musicians who tried to defeat MSTRMND: some did not consider or attend to the colored light beams at all, which proved detrimental to their sequencing of the notes, as the notes were often correct but came too early or too late in the sequence. As MSTRMND is an ear-training device, auditory signals are obviously the most prominent sensory channel. However, as the defeat of some users proved, hearing cannot always stand on its own; one must apply visual, auditory, and spatial awareness to achieve a victory over MSTRMND.
Lastly, there is feedback and error prevention. Reeves maintains that systems should generate functions and responses that can be easily understood (Turk 2013). As previously mentioned, the original game of Mastermind included feedback with black and white pegs to provide the user with hints and clues; without them, the game would become nearly impossible, and likely boring. While designing MSTRMND, I wanted to maintain the fundamentals of this feedback by providing users with clearly defined hints, such as the light beams flashing white in time with the sequence of sounds.
Ultimately, there are a variety of potential advantages of multimodal interfaces, which I have outlined here (Oviatt 2000):
- Support greater precision of spatial information (see visuo-spatial associations below)
- Provide users with alternative interaction methods
- Enhance error prevention
- Accommodate a wider range of users, tasks, and environments
Multimodal vs. Unimodal:
A variety of empirical studies show that multimodal interactive systems are generally preferred by users over unimodal alternatives (Xiao 2002). Moreover, multimodal integration allows for more flexibility and often leads to the inclusion of adaptation methods that better meet the needs of more diverse users (Xiao 2002).
It would be fair to assume that adding more modalities would produce more errors. On the contrary, however, the channel capacity for multimodal tasks is higher than for unimodal tasks (Samman 2004). According to a study performed at the University of Central Florida, multimodal channel capacity proved to be nearly three times the Magical Number Seven of its unimodal counterparts (Samman 2004). This implies that humans process information faster and more efficiently when it is presented in multiple modalities (van Wassenhove 2005).
Cross-modal interactions occur when a perceptual experience in one modality alters how another modality responds (O’Callaghan 2012). Cross-modality differs from multimodality in that it concerns the influence one sensory modality has on the perception of another.
Integrating perception with agency in interactive design involves all the senses and thus needs to address interactions not only between human and machine but also cross-modally. A neurological condition known as synesthesia provides a biological context for cross-modal interactions (Coen 2001). Synesthesia is a condition in which stimulation of one sense affects another sense or thought process (Afra 2009).
We have previously established a multitude of benefits that multimodal systems provide for interactive and enhanced learning between humans and machines; however, we must take it a step further by developing an understanding of the interactions within our brains, or cross-wiring.
Synesthesia is a congenital condition in which the perceptual experience of one particular stimulus evokes supplementary experiences or sensations, often across sensory channels (Bor 2014). Synesthesia was originally dismissed as an overactive imagination or metaphorical thinking, but it is now scientifically recognized as “an involuntary or fixed sensory mode of perception, based on atypical cross-wiring between differing sensorium” (a.k.a. involuntary synesthetic perception) (Williams 2015).
Notably, people with synesthesia show signs of enhanced memory over matched controls (Rothen 2012). Although there is no definitive answer as to why synesthetes exhibit enhanced memory, researchers have hypothesized that it is primarily due to a richer world experience, which in turn generates enhancements to systems for encoding and memory (Rothen 2012).
It is important to note that no two synesthetes encounter the same sensations or experiences. For example, two people with audio-visual synesthesia may see two entirely different things while listening to the same music (Herman 2003).
One case study of a mathematical and linguistic savant, also known to have Asperger’s Syndrome and involuntary synesthetic perception, set the record for pi memorization at just over 20,000 digits in about five hours (Rothen 2012). The subject attributed his enhanced memorization skills to his synesthetic condition, in which he sees numbers as “three-dimensional landscapes” (Holden 2005). He details that each digit has its own color and shape, and sometimes sound (Holden 2005). He then treats such sequences visuo-spatially, by placing objects, numbers, and the like in physical spaces in his mind (Hughes 2018). This is an extreme example that illustrates Miller’s research into spatial recognition and recoding.
Neuroscientist Vilayanur Ramachandran analyzed the extent of this subject’s savant skills by testing his memorization of 100 digits in 3 minutes, with digit size held proportional (Holden 2005). Not only was he able to memorize 68 digits and their locations in this short time, but he also retained the same 68 digits when tested again 3 days later (Holden 2005). However, when the test was given again with differently sized digits to disrupt the subject’s visuo-spatial memory, he retained only 16 items (Holden 2005).
This case echoes Charles Eriksen’s findings (on non-synesthetes), which ascertained that when size, brightness, and hue all vary together, the transmitted information increases substantially, from 2.7 bits [when the dimensions are measured individually] to 4.1 bits (Miller 1956). This amounts to an increase in channel capacity achieved by increasing the range of the input, without increasing the amount of information per dimension (Miller 1956).
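Read as channel capacities in bits, Eriksen’s figures imply a sizeable jump in the number of distinguishable categories when the dimensions co-vary; the conversion below is simply 2 raised to the capacity, as before.

```python
# Transmitted information in bits -> distinguishable categories (2**bits).
single_dim = 2 ** 2.7  # one dimension measured alone: about 6.5 categories
combined = 2 ** 4.1    # size, brightness and hue varying together: about 17
print(round(single_dim, 1), round(combined, 1))
```

So co-varying three redundant dimensions roughly trebles the number of stimuli a participant can reliably discriminate, without adding information to any single dimension.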
While there is some evidence suggesting that synesthesia is an inherited trait, it is not conclusively genetic (Bor 2014). Furthermore, the specific experiences that emerge are likely determined by individualized factors (Newell 2015). An alternative view of the neurological condition is that it derives from “repeated exposure to combined perception at key developmental stages” (Bor 2014). Thus, it is possible that diversely intelligent children may create “semantic hooks” to enhance memory. These unconscious memory aids may then lead to synesthetic traits, which could appear ingrained in adulthood (Bor 2014). Moreover, Bor and Rothen conducted a study that tested whether phenomenological synesthetic experiences can be learned by training non-synesthetic controls (Bor 2014). The controls participated in an extensive training module, involving reading tasks and adaptive memory exercises, designed to reinforce 13 specific letter-color associations (Bor 2014). Genuine synesthetes have typically been shown to outperform controls in both learning and retention (when learning and retention are in direct proportion) (Rothen 2012). Yet the results demonstrated that with adequate training, controls were able to produce lower scores, reflecting an increase in color consistency (fig. 5) (Bor 2014). It is important to note that when retested three months after training, the controls’ synesthetic phenomenology had mostly faded from memory (Bor 2014). Nevertheless, Bor and Rothen’s experimentation and analysis showed that it is possible to alter how humans experience the perceptual features of the world with adequate and consistent training (Bor 2014).
In conclusion, there is much evidence to suggest that synesthesia is linked to enhanced memory and performance, as well as enhanced sensory processing. This link between enhanced memory and synesthesia is due to changes in cognitive systems whereby perceptions become intertwined (Rothen 2012). In addition, synesthesia can also be applied to supplementary learning outcomes in classical conditioning (Rothen 2018).
MSTRMND was heavily inspired by the concept of synesthesia and the belief that a mode of interaction is enhanced when presented in multiple modalities. This suggests that if a player applies only one of these sensory modalities, they would limit the amount they could remember and retain.
As the case studies above demonstrate, synesthetic ability is not just a fascinating phenomenon for the gifted few (Williams 2015) but can also be learnt with consistent training (Bor 2014). Therefore MSTRMND, in its current state of eight colors, asks participants to focus carefully on the synesthesia-like effects of the system’s auditory and visual cues, to enhance memory and retention.
I want to briefly address feedback I have received from some participants suggesting that MSTRMND be turned into a mobile app to make the ear-training device more accessible. My response to these few has remained the same: visuo-spatial perception and understanding are key features of my design, as they play a crucial role in enhancing memory in multi-sensory applications (Turk 2013).
As we have already discussed with the synesthetic savant, visuo-spatial associations can provide another layer to working memory by way of multi-sensory integration. A study conducted at the University of Rome in 2011, found that visuo-spatial working memory was enhanced in multimodal interactions, compared to unimodal versions (Botta 2011). In this study, although neither the auditory, nor the visual cues, produced enhanced working memory outcomes when presented on their own, the combination of both cues prompted enhanced visuo-spatial working memory biases (Botta 2011). Botta attributes this to the multi-sensory integration processes that coexist between the two spatial cues (Botta 2011). Furthermore, this indicates the benefits of incorporating visuo-spatial cues, to affect cognitive performance in multi-sensory integration, over unimodal tasks of the same conditions (Botta 2011).
Multi-Intelligence Approach to Learning:
Now that we have set the groundwork for memory and sensory modality with a wide range of case studies and experiments, the real question is: why? In school, learning is primarily focused on math and linguistics, but this does not allow students with different talents or intelligences to expand their knowledge and interests (Jovanovic 2011). Conversely, interactive games like MSTRMND can provide a multi-intelligence approach to learning by highlighting particular intellects (Jovanovic 2011).
Within the study of cognition, Deci and Ryan conceived what they called self-determination theory (SDT) (Ryan 2000). SDT is one of the most well-established and most-referenced frameworks in motivation theory today (Mekler 2017). The theory maintains that human development is driven by the fulfillment of psychological needs (Jovanovic 2011) for “competence, autonomy, and relatedness” (Deci 2000). Moreover, the inherent satisfaction of psychological needs is required for high degrees of motivation and is therefore derived from what people find noteworthy or meaningful (Jovanovic 2011). Motivation is thus a byproduct of fun.
Denis and Jouvelet applied principles of SDT to inform their definition of fun, dividing it into two categories: pleasure and desire (Denis 2005). Combined, pleasure and desire create what they call ludic tension, an inner state of immersion where a user is so engaged in an activity they “lose track of time and the outside world” (fig. 6) (Denis 2005). In Denis and Jouvelet’s terms of “intrinsically motivated states” (Denis 2005), MSTRMND approaches this quality of discovery by providing the user with cerebral exercises that keep players engaged, by way of attainable goals and an approachable interactive design aesthetic.
Motivation has been shown to increase effective cognitive strategies for enhanced memory, while amotivation “decreases memorization and personal development” (Jovanovic 2011). Over the past few years, developers have applied principles of motivation theory to traditional learning by way of gamification (Buckley 2018). Gamification involves the application of “motivational affordances” to non-traditional game contexts, to foster more effective engagement (Mekler 2017). This is achieved by linking the SDT approach to motivation theory with game elements, to develop a framework for multi-sensory learning (Buckley 2018), as in the case of MSTRMND.
Fundamentally, games are constructed on a reward-based system: when you accomplish an achievement, you gain some sort of merit. Rewards have been shown to psychologically motivate a participant to continue playing because of the positive feedback received (Islam 2017). The more frequently a participant plays (practices) a game, the more proficient they become, and thus they advance to more difficult levels (fig. 7) (Islam 2017).
Games must first set out clear goals and lessons that can be learned through practice, and then adjust instructions and difficulty to best meet the needs and abilities of the user (Jovanovic 2011). This process generates a more personalized approach to learning, which results in more attentiveness and thus more motivated learners. Furthermore, this methodology prompts higher degrees of problem-solving skill, more strategic planning, time management, multitasking, and most importantly, the ability to adapt to ever-changing scenarios (Jovanovic 2011).
If we are able to better understand our users and profile them accordingly, we can further encourage adaptive learning through the use of multi-intelligence theories (Jovanovic 2011). Jovanovic presents a model categorizing these prospective profiles into different dominant contexts of learning interfaces (fig. 9) (Jovanovic 2011).
A cognitive study conducted by Jovanovic aimed to measure the overall quality of learning games by testing the correlation between the motivational factors employed and overall game quality (Jovanovic 2011). The study divided a large class of students into smaller groups, each of which was to design a learning game utilizing motivational factors (Jovanovic 2011). The results (fig. 8) showed that the groups who utilized the most motivational effects produced the highest-quality games (Jovanovic 2011).
Multi-sensory integration has been shown to enhance learning by providing a multi-intelligence approach to perception and interaction (Williams 2015). Based on these findings, the gamification system I have applied to MSTRMND nurtures ambition based on Deci and Ryan’s self-determination theory (Jovanovic 2011). SDT provides strategies for optimizing motivation (Ryan 2000) by fostering comfort, agency, and capability (Williams 2015). By applying these motivational theories to learning paradigms, we can increase the naturalism of training modules and therefore produce more effective learning (Bor 2014).
The following will detail specific multi-sensory precedent projects that have helped to inform the design of MSTRMND.
Gordon Pask was one of the lead figures in the study of cybernetics and the originator of conversation theory (Haque 2007). Much like my own, his mechanical devices were driven by theoretical principles in cognitive psychology (Bird 2008). Pask considered himself a “mechanical philosopher,” as he was far more interested in how we understand understanding, rather than the understanding of things themselves (Bird 2008). What differentiates cybernetics, and more specifically Pask, from traditional psychology or engineering is his approach to learning and knowledge within the field of performance (Pickering 2010). Much of my design for MSTRMND has been informed by a combination of Pask’s mechanical philosophies with traditional theories in psychology.
Aesthetically Potent Environments:
As Pask states, “Man is prone to seek novelty in his environment and, having found a novel situation, to learn how to control it” (Pask 1971). In this context, control denotes ‘explaining’ or ‘relating’ the situation to an existing body of experience (Pask 1971). This methodology informs what he defines as aesthetically potent environments (Pask 1971).
Pask defines aesthetically potent environments using four main principles. The first is that the design must offer sufficient variety: enough to keep participants engaged, but not so much that it becomes chaotic and incomprehensible (Pask 1971).
The second and third principles go hand in hand: the second specifies the necessary inclusion of recognizable concepts that can be learnt by way of the third, which states the importance of providing clear instructions or clues to guide participants through known and unknown concepts (Pask 1971). As it currently stands, MSTRMND is supported by hints and auditory cues that provide instructions and clues on how to play.
Finally, the fourth and most important is that [the design] “respond to the participant, by engaging him in conversation” and adapt accordingly (Pask 1971). As previously discussed, training paradigms foster motivation and engage learners when systems are designed with adaptability in mind.
Pask affirms that this final principle is not required for an aesthetically potent environment; however, it distinguishes what he believes to be truly novel works of art (Pask 1971). It is important to note that while MSTRMND does not currently address this fourth principle, my goal since conception has remained fixed on creating an adaptable piece that responds to users’ abilities and needs. The future design of MSTRMND will explore this method of adaptation and conversation further by including an optional practice or “free-play” sequence. This sequence will be devoid of all instructional cues, to foster a more personalized approach to learning audio-visual associations [in the context of MSTRMND]. Ultimately, this would give users the option to follow the game’s directions or to learn on their own terms.
Additionally, I have been keen on encoding an intelligent9 system that responds to the player’s abilities by varying the speed at which signals are given. Adjusting the difficulty of gameplay will help to accommodate a larger scope of users: from trained musicians to inexperienced music enthusiasts alike.
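The speed-adaptation scheme described above can be sketched as a simple feedback rule. This is a hypothetical illustration only; the class name, window size, and scaling factors are my assumptions, not MSTRMND’s actual implementation:

```python
class AdaptivePacer:
    """Hypothetical sketch: vary the interval between signals
    based on a rolling measure of the player's recent accuracy."""

    def __init__(self, base_interval=2.0, min_interval=0.5, max_interval=4.0):
        self.interval = base_interval   # seconds between signals
        self.min_interval = min_interval
        self.max_interval = max_interval
        self.recent = []                # last few correct/incorrect results

    def record(self, correct: bool) -> float:
        # keep a sliding 5-trial window of results
        self.recent = (self.recent + [correct])[-5:]
        accuracy = sum(self.recent) / len(self.recent)
        if accuracy > 0.8:
            # player is comfortable: speed the signals up
            self.interval = max(self.min_interval, self.interval * 0.9)
        elif accuracy < 0.4:
            # player is struggling: slow the signals down
            self.interval = min(self.max_interval, self.interval * 1.1)
        return self.interval
```

A rule of this shape would let the same game serve both trained musicians (converging on fast signals) and novices (settling at a slower, more supportive pace).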
My initial motivation for designing MSTRMND was my fascination with the concept of synesthesia. Much of my interest in this subject began after researching Pask’s Musicolour: a performance system of colored lights that respond to audio input from a human performer (Haque 2007). The output of the lights was dependent upon frequency and rhythm from the performer (Pask 1971).
Pask was also notably curious about synesthetic perception at this time, as the augmentation of sound with light was rare in the 1950s (Pask 1971). He was aware that if a synesthetic experience were to appear, it would differ among performers and audience members (Pask 1971). Hence his approach of encoding the system with an adaptive learning function, intended to map the auditory signals onto an encoded visual language (Pask 1971).
Not only did Musicolour provide a bespoke visual language based on a human performer’s signals, it also had the ability to grow “bored” if the rhythm or frequency became too static (Haque 2007). Eventually, the system would stop illuminating the lights, prompting the performer to vary the input (Haque 2007).
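This “boredom” behavior can be illustrated with a toy habituation rule. The sketch below is a modern software analogy of my own, not Pask’s original analogue circuitry; the function name and threshold are assumptions:

```python
from statistics import pstdev

def is_bored(recent_frequencies, threshold=5.0):
    """Toy habituation rule: report 'boredom' when the input
    frequencies (in Hz) have varied too little over the recent
    window, i.e. their spread falls below a threshold."""
    if len(recent_frequencies) < 2:
        return False
    return pstdev(recent_frequencies) < threshold

# A static performance triggers boredom; a varied one does not.
static = [440.0, 441.0, 440.5, 440.2]
varied = [220.0, 440.0, 330.0, 550.0]
```

When `is_bored` fires, a Musicolour-like system would suppress its light output until the performer changed something, closing the loop Pask describes.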
Although Pask’s initial interest in the project was synesthesia, he quickly realized that the learning capabilities of the machine were what made it so unique (Pask 1971). The human performer “trained the machine and it played a game with him,” but not in a static or predictable way (Pask 1971). This created a feedback loop in which the machine responded to the human performer’s improvisations and unpredictability by interpreting them and feeding them straight back to the performer (Pickering 2010), which raises the question: who is controlling whom?
The answer is that neither the human nor the machine controlled the performance; they worked in tandem to create an extension of one another, wherein the machine learns and adapts with the human performer, while the human performer learns and adapts with the machine (Bird 2008). This is central to Pask’s theory that “man” is essentially adaptive, and machines can thus mimic human behavior (Pickering 2010).
Pask wanted to take his adaptive-machine methodology and apply it to more universal systems by returning to his initial concentration: learning (Bird 2008). In the mid-1950s, as technology developed at a remarkable rate, so did commercial business, and with it the need for competent keyboard operators (Bird 2008). Pask created what he called the first “Self-Adaptive Keyboard Instructor,” or SAKI (Haque 2007).
Essentially, SAKI was a training device that tested participants’ speed and accuracy in typing alphabetic and numeric symbols on a 12-key keyboard (Haque 2007). The system guided participants with light cues (arranged in the same spatial layout as the keyboard) to press the relevant keys to encode data (fig. 10) (Bird 2008). Initially, items were randomly presented at a slow and uniform rate, with corresponding lights remaining on for a long period of time (Bird 2008). The machine stored the operator’s response time for each item until all four exercise lines had been achieved (Bird 2008). SAKI provides an unequivocal answer to Pask’s four principles of aesthetically potent environments by varying the difficulty of each item to best meet the needs and capabilities of users (Bird 2008).
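SAKI’s per-item adaptation can be approximated in code. The sketch below is a loose software analogy of my own (the original was an analogue electromechanical device); the class name, smoothing weights, and floor value are assumptions:

```python
class SakiLikeTrainer:
    """Sketch of SAKI-style per-item adaptation: each key's cue
    duration tracks the operator's stored response times, so the
    operator's weakest items keep longer, more supportive cues."""

    def __init__(self, items, start_duration=3.0):
        # one cue duration per item, all starting slow and uniform
        self.durations = {item: start_duration for item in items}

    def update(self, item, response_time):
        # blend the stored duration toward the operator's latest
        # response time, never dropping below a 0.3 s floor
        old = self.durations[item]
        self.durations[item] = max(0.3, 0.7 * old + 0.3 * response_time)

    def weakest_item(self):
        # practice concentrates on the slowest (least learned) item
        return max(self.durations, key=self.durations.get)
```

As the operator improves on an item, its cue shortens; items the operator has not mastered keep long cues, mimicking a teacher who drills a student’s weaker areas.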
Pask’s aim for SAKI was to mimic the possible relationship between teacher and student, wherein a human teacher responds directly to a student’s proficiencies by focusing on certain weaker aspects of measured areas (Haque 2007). The machine not only responds to the student’s current input, but also adapts its responses based on prior interactions (Haque 2007). Much like Pask’s application of Musicolour, the machine responds to the student while, simultaneously, the student responds to the machine (Haque 2007).
Not only is the machine treated as a black-box,9 but the user is as well (Bird 2008). The machine then tries to imitate these non-stable characteristics to create a relationship between itself and the user (Bird 2008). The feedback received constantly updates and adjusts variables to reach a desired goal (Bird 2008). In essence, the system is conditioning you, and conversely you are conditioning it (Bird 2008).
Pask concluded that people are motivated by the desire to reach stable interactions with machines, rather than to reach any particular performance goal (Bird 2008).
Chris Creed & Paul Newland, MEDIATE:
MEDIATE (Multi-sensory Environment Design for an Interface between Autistic and Typical Expressiveness) was a multi-sensory environment designed for children on the autism spectrum with limited verbal and social skills (Williams 2015). Psychologically, it centered around agency in a multi-sensory environment, devoid of any social context (Williams 2015).
Owing in part to limited social skills, children with autism often struggle with the experience of control, as their world can be chaotic and unpredictable (Williams 2015). MEDIATE differed in its design because it provided children on the spectrum with a place to interact physically with the world through embodied10 learning, affording them control of their environment, behavior and expressiveness (Williams 2015).
MEDIATE had a wide array of visual, audio and tactile interfaces with organic and active materials and shapes (Williams 2015). The system was designed to be adaptive, creating individual sensory profiles based on the behaviors of its users (Williams 2015).
While MEDIATE was designed specifically for children with autism, it was also an opportunity for parents and caregivers to observe their child’s behavior and sensory preferences (Williams 2015). Notably, one mother observed as her son with Asperger’s Syndrome became captivated by the TuneFork; the more he played, the more complex the interaction level became (Williams 2015). Eventually, he was able to change the color of the screens by tapping the TuneFork and was observed continuously selecting a purple hue (Williams 2015). His mother expressed that she believed this was a form of sensory expression, so she proceeded to paint his bedroom walls purple (Williams 2015). Subsequently, the child behaved more calmly at home and was able to sleep through the night for the first time in years (Williams 2015).
MEDIATE provides a foundation that establishes an adaptive, embodied10, multi-sensory approach to reaching disadvantaged groups. My approach for MSTRMND incorporates many of these features by encouraging players to use their entire bodies to become controllers of their own game. Furthermore, this approach to more personalized and embodied learning is intended to enhance memory on a more individualized level. The future of MSTRMND will include an intelligent9 system, much like MEDIATE’s, that responds to its users and adapts accordingly to best meet their wants and needs.
In this paper, I have outlined a theoretical framework, comprising collated research from information measurement, developments in multi-sensory integration, and psychological proficiencies and motivations, to provide a new approach to learning characterized by perceptual content. My analysis provides a psychologically motivated foundation for integrating a multi-sensory approach into cognitive interactive games.
My research has afforded MSTRMND with methodologies and procedures that assess users’ psychological characteristics, abilities, and preferences, to better identify how people interact with and make sense of the world. As previously discussed, identifying more individualized user profiles has been proven to enhance memory and retention by motivating learners on a more personal level. That said, in order to develop a more personalized approach to multi-sensory learning, MSTRMND will need to explore techniques of adaptive processing based on empirical research.
Future of MSTRMND:
While MSTRMND has employed a rich theoretical framework to inform its design, future empirical studies of users’ psychological proficiencies and incentives need to be conducted. For example, I could test the channel capacity of musically trained versus non-musically trained participants, then adopt these results to inform the future design of MSTRMND.
My aim is to implement a set of guidelines for the system that allow for as much self-organization as possible, such as the aforementioned practice and “free-play” modes (see SAKI, MEDIATE). Furthermore, conversational adaptability between machine and user is necessary to further enhance memory and retention. This will be achieved by providing a more individualized approach to learning, varying the speed of signals based on a player’s skillset. The system can thus potentially become an extension of the user’s body as the complexity of interaction is increased through repetitious and consistent training challenges.
Furthermore, instead of capping the game at just eight levels, my objective is to create a version of MSTRMND with no winning limit. This will be supported by a “high-score board” (much like traditional arcade games), which is intended to increase players’ competition and motivation.
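The proposed high-score board amounts to a small ordered list capped at a fixed number of entries, as in traditional arcade games. A minimal sketch (function name, cap, and in-memory storage are illustrative assumptions):

```python
def add_score(board, name, score, max_entries=10):
    """Insert a (name, score) pair, keeping the board sorted from
    highest to lowest and capped at max_entries, arcade-style."""
    board.append((name, score))
    board.sort(key=lambda entry: entry[1], reverse=True)
    del board[max_entries:]   # drop anything beyond the cap
    return board

board = []
add_score(board, "AAA", 1200)
add_score(board, "BBB", 3400)
```

With no winning limit, the board itself becomes the goal structure: players compete against prior entries rather than against a fixed final level.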
In conclusion, this review can be used as a basis for collaborative development within the field of interactive multi-sensory game design. Further empirical studies will need to be conducted to develop a language of information capacity to inform the future of MSTRMND. Ultimately, I will continue exploring the cognitive attributes of multi-sensory engagement by simplifying and advancing the design processes of mechanisms for interactive learning games.
- Alternatives: being different options for decision making (Miller 1956)
- Transfers: when a learner understands how to apply knowledge in a different context in cognitive theory (Ertmer 1993)
- Phenomenology: the study of phenomena (essences) in philosophy, separate from the natural world (Merleau-Ponty 1956)
- Classical Conditioning: naturally occurring stimulus producing an unconditioned response (Skinner 1974)
- Multi-Intelligence: traditional approaches to learning focus mainly on mathematics and linguistics; multi-intelligence theory affirms there are eight ways in which humans learn, so treating everyone the same in regard to intellect is unfair (Lynch 1995)
- Gamification: applying game design elements to nongame contexts to enact effective motivating behavior (Buckley 2018)
- Linear Regression Model: linear approach to the relationship between a dependent and independent variable (Wikipedia 2019)
- Intelligent: in regard to, robots that can be programed to take actions or make choices based on input from sensors (Intelligent Robot 2019)
- Black-boxes: first coined by Ashby concerns intelligent computing, of which a device or system appears to be intelligent8 but we have no concept of the inner mechanisms (Glanville 1982)
- Embodied: be an expression of or give a tangible or visible form to (an idea, quality, or feeling) (Stolz 2015)
- Twister: a game played on a large plastic mat, printed with a 6×6 grid of colors, that is spread on the floor; players are given instructions about where to place body parts so as to be on the correct color (Wikipedia 2019)
Afra, P., Funke, M., & Matsuo, F. (2009). Acquired auditory-visual synesthestia: A window to early cross-modal sensory interactions. Psychology Research and Behavior Management, 2, 31—37. https://doi.org/10.2147/PRBM.S4481
Baddeley, A. (2010). Working memory. Current Biology, 20(4), R136-R140.
Beer, S. (1960). Cybernetics and management.
Berger, C. C., & Ehrsson, H. H. (2013). Mental imagery changes multisensory perception. Current Biology, 23(14), 1367—1372. https://doi.org/10.1016/j.cub.2013.06.012
Bird, J., & Di Paolo, E. (2013). Gordon Pask His Maverick Machines. The Mechanical Mind in History, 185—211. https://doi.org/10.7551/mitpress/9780262083775.003.0008
Bor, D., Rothen, N., Schwartzman, D. J., Clayton, S., & Seth, A. K. (2014). Adults can be trained to acquire synesthetic experiences. Scientific Reports, 4. https://doi.org/10.1038/srep07089
Botta, F., Santangelo, V., Raffone, A., Sanabria, D., Lupiáñez, J., & Belardinelli, M. O. (2011). Multisensory Integration Affects Visuo-Spatial Working Memory. Journal of Experimental Psychology: Human Perception and Performance, 37(4), 1099-1109. https://doi.org/10.1037/a0023513
Brankaert, R., Ouden, E. Den, Buchenau, M., Suri, J. F., de Valk, L., Bekker, T., … Bozarth, M. A. (2009). Experiential Probes: probing for emerging behavior patterns in everyday life. International Journal of Design, 9(1), 2880—2888. https://doi.org/10.1017/S1041610297004006
Buckley, J., DeWille, T., Exton, C., Exton, G., & Murray, L. (2018). A Gamification—Motivation Design Framework for Educational Software Developers. Journal of Educational Technology Systems, 47(1), 101—127. https://doi.org/10.1177/0047239518783153
Caschera, M. C., D’Ulizia, A., Ferri, F., & Grifoni, P. (2012). Towards evolutionary multimodal interaction. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 7567 LNCS, 608—616. https://doi.org/10.1007/978-3-642-33618-8_80
Coen, M. H. (2001). Multimodal integration – A biological view. IJCAI International Joint Conference on Artificial Intelligence, 1417—1424.
Connelly, L. M. (2010). What is phenomenology? Medsurg Nursing: Official Journal of the Academy of Medical-Surgical Nurses, 19(2), 127-128.
Covaci, A., Ghinea, G., Lin, C. H., Huang, S. H., & Shih, J. L. (2018). Multisensory games-based learning – lessons learnt from olfactory enhancement of a digital board game. Multimedia Tools and Applications, 77(16), 21245—21263. https://doi.org/10.1007/s11042-017-5459-2
Cowan, N. (2015). George Miller’s magical number of immediate memory in retrospect: Observations on the faltering progression of science. Psychological Review, 122(3), 536—541. https://doi.org/10.1037/a0039035
Denis, G., & Jouvelot, P. (2005). Motivation-driven educational game design: Applying best practices to music education. ACM International Conference Proceeding Series, 265, 462—465. https://doi.org/10.1145/1178477.1178581
Driver, J., & Spence, C. (1998). Cross-modal links in spatial attention. Philosophical Transactions of the Royal Society B: Biological Sciences, 353(1373), 1319—1331. https://doi.org/10.1098/rstb.1998.0286
Eriksen, C. W., & Hake, H. W. (1955). Accuracy of Discrimination l. Journal of Experimental Psychology, 50(3).
Ertmer, P. A., & Newby, T. J. (1993). Behaviorism, Cognitivism, Constructivism: Comparing Critical Features from an Instructional Design Perspective. Performance Improvement Quarterly, 6(4), 50—72. https://doi.org/10.1111/j.1937-8327.1993.tb00605.x
Focardi, R., & Luccio, F. L. (2012). Guessing Bank PINs by Winning a Mastermind Game. Theory of Computing Systems, 50(1), 52—71. https://doi.org/10.1007/s00224-011-9340-9
Friedrich, J., Becker, M., Kramer, F., Wirth, M., & Schneider, M. (2019). Incentive design and gamification for knowledge management. Journal of Business Research, (February). https://doi.org/10.1016/j.jbusres.2019.02.009
Galati, G., Pelle, G., Berthoz, A., & Committeri, G. (2010). Multiple reference frames used by the human brain for spatial perception and memory. Experimental Brain Research, 206(2), 109—120. https://doi.org/10.1007/s00221-010-2168-8
Gierasimczuk, N., Van der Maas, H. L., & Raijmakers, M. E. (2013). An analytic tableaux model for deductive mastermind empirically tested with a massively used online learning system. Journal of Logic, Language and Information, 22(3), 297-314.
Glanville, R. (1982). Inside every white box there are two black boxes trying to get out. Behavioral Science, 27(1), 1-11.
Glanville, R. (2009). A (Cybernetic) Musing: Design and Cybernetics. Cybernetics and Human Knowing, 16(3), 175. Retrieved from http://www.nomads.usp.br/virus/virus03/PDF/invited/2_en.pdf
Haque, U. (2007). The architectural relevance of Gordon Pask. Architectural Design, 77(4), 54-61.
Herman, S. (2003). Synesthesia. Global Cosmetic Industry, 171(4), 54-54.
Herring, S. R., & Rights, A. (2008). Working Memory. ReCALL, 20(4), 1-16.
Holden, C. (2005). Colored Memory. Science, 308(5721), 492.
Hughes, J. E. A., Gruffydd, E., Simner, J., & Ward, J. (2019). Synaesthetes show advantages in savant skill acquisition: Training calendar calculation in sequence-space synaesthesia. Cortex, 113, 67—82. https://doi.org/10.1016/j.cortex.2018.11.023
Islam, A. (2017). Cross-Modal Computer Games as an Interactive Learning Medium. (April), 82—90. https://doi.org/10.20472/iac.2017.030.017
Janich, P. (2018). What is information? (Vol. 55). U of Minnesota Press.
Jovanovic, M., Starcevic, D., Minovic, M., & Stavljanin, V. (2011). Motivation and multimodal interaction in model-driven educational game design. IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans, 41(4), 817—824. https://doi.org/10.1109/TSMCA.2011.2132711
Laming, D. (2010). Statistical information and uncertainty: A critique of applications in experimental psychology. Entropy, 12(4), 720-771.
Luce, R. D. (2003). Whatever Happened to Information Theory in Psychology? Review of General Psychology, 7(2), 183—188. https://doi.org/10.1037/1089-26184.108.40.206
Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390(6657), 279—284. https://doi.org/10.1038/36846
MacLeod, C. M., & Risko, E. F. (2017). Radical Cognitivism? Distinguishing Behavior from Thought. Journal of Applied Research in Memory and Cognition, 6(1), 22—26. https://doi.org/10.1016/j.jarmac.2016.11.001
Mcgurk, H., & Macdonald, J. (1976). Hearing lips and seeing voices. Nature, 264(5588), 746—748. https://doi.org/10.1038/264746a0
Mekler, E. D., Brühlmann, F., Tuch, A. N., & Opwis, K. (2017). Towards understanding the effects of individual gamification elements on intrinsic motivation and performance. Computers in Human Behavior, 71, 525-534. https://doi.org/10.1016/j.chb.2015.08.048
Miller, G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review, 63(2), 81—97. https://doi.org/10.1037/h0043158
Newell, F. N., & Mitchell, K. J. (2016). Multisensory integration and cross-modal learning in synaesthesia: A unifying model. Neuropsychologia, 88, 140—150. https://doi.org/10.1016/j.neuropsychologia.2015.07.026
O’Callaghan, C. (2012). Perception and Multimodality. The Oxford Handbook of Philosophy of Cognitive Science, (September), 1—28. https://doi.org/10.1093/oxfordhb/9780195309799.013.0005
Oviatt, S. (1999). Ten myths of multimodal interaction. Communications of the ACM, 42(11), 74-81.
Oviatt, S., & Cohen, P. (2000). Multimodal interfaces that process what comes naturally. Communications of the ACM, 43(3), 45-53.
Oviatt, S. (2003). Advances in robust multimodal interface design. IEEE Computer Graphics and Applications, (5), 62-68.
Pask, G. (1971). A Comment, a Case History and a Plan. Cybernetics, Art and Ideas, 76—99. Retrieved from http://pangaro.com/pask/Pask Cybernetic Serendipity Musicolour and Colloquy of Mobiles.pdf
Pask, G. (1975).Â The cybernetics of human learning and performance: A guide to theory and research. Hutchinson.
Pask, G., Elisabeth, T., & York, A. (1976). Conversation Theory – Applications in Education and Epistemology.
Petri, H. L., & Mishkin, M. (1994). Behaviorism, cognitivism and the neuropsychology of memory. Am.Sci., 82(1), 30—37.
Pickering, A. (2010).Â The cybernetic brain: Sketches of another future. University of Chicago Press.
Pollack, I. (1953). Assimilation of sequentially encoded information. The American Journal of Psychology, 66(3), 421-435.
Pollack, I. (1954a). The information of elementary auditory displays. The Journal of the Acoustical Society of America. https://doi.org/10.1121/1.1917486
Pollack, I., & Ficks, L. (1954b). Information of elementary multidimensional auditory displays. The Journal of the Acoustical Society of America, 26(2), 155-158.
Porta, M. Information theory. In Last, J. (Ed.), A Dictionary of Public Health. Oxford University Press. Retrieved 16 Sep. 2019, from https://www.oxfordreference.com/view/10.1093/acref/9780191844386.001.0001/acref-9780191844386-e-2277.
Reeves, L. M., Lai, J., Larson, J. A., Oviatt, S., Balaji, T. S., Buisine, S., … & McTear, M. (2004). Guidelines for multimodal user interface design. Communications of the ACM, 47(1), 57-59.
Rothen, N., Meier, B., & Ward, J. (2012). Enhanced memory ability: Insights from synaesthesia. Neuroscience and Biobehavioral Reviews, 36(8), 1952—1963. https://doi.org/10.1016/j.neubiorev.2012.05.004
Rothen, N., Seth, A. K., & Ward, J. (2018). Synesthesia improves sensory memory, when perceptual awareness is high. Vision Research, 153(September), 1—6. https://doi.org/10.1016/j.visres.2018.09.002
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68.
Sagiv, N., Simner, J., Collins, J., Butterworth, B., & Ward, J. (2006). What is the relationship between synaesthesia and visuo-spatial number forms? Cognition, 101(1), 114—128. https://doi.org/10.1016/j.cognition.2005.09.004
Samman, S. N., Stanney, K. M., Dalton, J., Ahmad, A. M., Bowers, C., & Sims, V. (2004). Multimodal Interaction: Multi-Capacity Processing Beyond 7 ± 2. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 48(3), 386-390. https://doi.org/10.1177/154193120404800324
Schunk, D. H. (1991).Â Learning theories an educational perspective. Macmillan.
Siegel, J. A., & Siegel, W. (1972). Absolute judgment and paired-associate learning: Kissing cousins or identical twins? Psychological Review, 79(4), 300.
Turk, M. (2014). Multimodal interaction: A review. Pattern Recognition Letters, 36(1), 189—195. https://doi.org/10.1016/j.patrec.2013.07.003
Van Wassenhove, V., Grant, K. W., & Poeppel, D. (2005). Visual speech speeds up the neural processing of auditory speech. Proceedings of the National Academy of Sciences, 102(4), 1181-1186.
Williams, R. (2015). Synesthesia: From cross-modal to modality-free learning and knowledge. Leonardo, 48(1), 48-15.
Xiao, B., Girand, C., & Oviatt, S. (2002). Multimodal integration patterns in children. 7th International Conference on Spoken Language Processing, ICSLP 2002, (May), 629—632.
Intelligent Robot. (2019). Retrieved September 2019, from https://encyclopedia2.thefreedictionary.com/intelligent+robot
Karch, M. (2019). A Beginner’s Guide to Apps. Retrieved September 2019, from https://www.lifewire.com/what-are-apps-1616114
Lynch, W. M. (1995). Multiple Intelligences. Teaching Education, 7(1), 155—157. https://doi.org/10.1080/1047621950070122
Merleau-Ponty, M., & Bannan, J. F. (1956). What is phenomenology? CrossCurrents, 6(1), 59-70.
Retro Mastermind Game. (2019). Retrieved September 2019, from https://intl.target.com/p/retro-mastermind-game/-/A-17073123
Skinner, B. F. (1974). About Behaviourism. London: Cape.
Stolz, S. A. (2015). Embodied Learning. Educational Philosophy and Theory, 47(5), 474—487. https://doi.org/10.1080/00131857.2013.879694
Wikipedia contributors. (2019, September 7). Linear regression. In Wikipedia, The Free Encyclopedia. Retrieved 13:09, September 16, 2019, from https://en.wikipedia.org/w/index.php?title=Linear_regression&oldid=914519432
Wikipedia contributors. (2019, July 18). Twister (game). In Wikipedia, The Free Encyclopedia. Retrieved 15:25, September 16, 2019, from https://en.wikipedia.org/w/index.php?title=Twister_(game)&oldid=906838734
Wolz, S. H., & Carbon, C.-C. (2015). Images in Art and Science and the Quest Image Science. Leonardo, 48(1), 74—75. https://doi.org/10.1162/Leon
Figure 1: Retro Mastermind Game. (2019). Retrieved September 2019, from https://intl.target.com/p/retro-mastermind-game/-/A-17073123
Figure 2: Yagilowich, A., (2019). MSTRMND, Colored beams setup, JPEG
Figure 3: Pollack, I., & Ficks, L. (1954b). Information of elementary multidimensional auditory displays. The Journal of the Acoustical Society of America, 26(2), 155-158
Figure 4: Bolt, R. A. (1980). “Put-that-there”: Voice and gesture at the graphics interface. Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1980, 262—270. https://doi.org/10.1145/800250.807503
Figure 5: Bor, D., Rothen, N., Schwartzman, D.J., Clayton S., & Seth, A.K. (2014). Adults can be trained to acquire synesthetic experiences. Scientific Reports, 4. https://doi.org/10.1038/srep07089
Figure 6: Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68
Figure 7: Islam, A. (2017). Cross-Modal Computer Games as an Interactive Medium. (April), 82-90. https://doi.org/10.20472/iac.2017.030.017
Figure 8: Jovanovic, M., Starcevic, D., Minovic, M., & Stavljanin, V. (2011). Motivation and multimodal interaction in model-driven educational game design. IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans, 41(4), 817-824. https://doi.org/10.1109/TSMCA.2011.2132711
Figure 9: Jovanovic, M., Starcevic, D., Minovic, M., & Stavljanin, V. (2011). Motivation and multimodal interaction in model-driven educational game design. IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans, 41(4), 817—824. https://doi.org/10.1109/TSMCA.2011.2132711
Figure 10: Watters, A. (2019). Gordon Pask’s Adaptive Teaching Machines. Retrieved 16 September 2019, from http://hackeducation.com/2015/03/28/pask