
Bartlett School of Architecture, UCL




Integrating psychology principles to design a multi-sensory cognitive and performative challenge


This research will provide knowledge of the factors involved in designing multi-sensory training games to enhance memory and produce positive learning outcomes. It does so by presenting methods, techniques and case studies that assess the knowledge and skills learners enact to remain engaged and motivated in interactive games. The following review outlines a theoretical framework of approaches and methodologies that have informed my piece, Mastermind 3.0.


Mastermind 3.0 is an interactive logic and ear-training game that combines echoic (auditory) and iconic (visual) memory with spatial perception to create a multi-sensory cognitive challenge. The system was designed to mimic natural multi-sensory human interaction by way of psychological principles, machine perception, communication, and interaction techniques. In the following analysis, I will argue for the importance of incorporating psychological factors into multi-sensory integration.

We speak, move, gesture, and listen as part of our everyday human discourse, and it is through this discourse that we combine sensory inputs with system outputs, encouraging humans to communicate and comprehend through a variety of different sensoria (Turk 2013). Our perceptions are based on the retrieval of stimuli from any of the five senses, which we call modes or modalities (Turk 2013). These modalities create a communication channel by way of a specified sensorium, such as touch or taste, through which information is transmitted and integrated from human to human and now, human to machine (Turk 2013).

The theoretical foundations presented in this review have helped to inform the design and development of my piece, Mastermind 3.0. This paper will examine how to integrate psychology principles to design a multi-sensory, cognitive and performative challenge that fits within the built environment.

Mastermind 3.0:

Mastermind 3.0 is loosely based on the 1970s board game Mastermind (fig. 1), a code-breaking game for two players (Gierasimczuk 2013). The codemaker creates a secret code made up of different colored pegs, and the codebreaker tries to match the code using logic and deduction (Gierasimczuk 2013). After each move, the codemaker gives feedback clues to their opponent with black and white pegs, to help them decipher the sequence (Gierasimczuk 2013). Mastermind is a logic game that tests complex skills and strategies in “trials of experimentation and evaluation” (Gierasimczuk 2013).

Figure 1: Original Mastermind board game

How to Play:

Mastermind 3.0 consists of four incremental light rings that sit within the built environment and respond to input from a MIDI keyboard’s pentatonic (black) keys. The objective of the game is to correctly repeat a computer’s randomly generated, growing sequence of sound and light signals in 25 guesses or fewer. For this adaptation of Mastermind, each of the MIDI’s 15 pentatonic keys has been encoded to an RGB color, intended to create cross-sensory associations within the mind (fig. 2 and 3).

Mastermind 3.0 indicates the start of the game when the “start” key is pressed, by flashing a white light on the first ring with an accompanying sound (fig. 4). Players are instructed to repeat the signal they hear by pressing the correct note on the keyboard. If incorrect, players are encouraged to press the “hint” or “reset” key as frequently as needed throughout the game. The “hint” key, much like the black and white pegs in the original game, provides players with feedback by replaying the sounds of the target sequence with accompanying white lights [on the rings] to indicate which level the player is on. The RGB color associations are to be used alongside the hints to determine whether notes in the sequence are higher or lower than a player’s guess. The colors are arranged on the keyboard in a rainbow sequence of ascending order, from red (lowest notes) to pink (highest notes) (fig. 2). This approach to sound and color is intended to enhance memory by incorporating visuo-spatial associations with multi-sensory processing.

Figure 2: MIDI keyboard encoded to RGB colors
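The rainbow mapping described above can be sketched as a simple lookup table. This is an illustrative assumption, not the installation’s actual code: the MIDI note numbers, the hue range, and the names `PENTATONIC_NOTES` and `note_to_rgb` are all my own for the sake of the sketch.

```python
import colorsys

# Hypothetical sketch of the note-to-color encoding: 15 pentatonic keys mapped
# in ascending "rainbow" order from red (lowest notes) to pink (highest notes).
PENTATONIC_NOTES = [36, 39, 41, 43, 46, 48, 51, 53, 55, 58, 60, 63, 65, 67, 70]

def note_to_rgb(index: int, total: int = 15) -> tuple:
    """Spread hues evenly from red (hue 0.0) toward pink/magenta as pitch rises."""
    hue = 0.9 * index / (total - 1)          # 0.0 = red ... 0.9 = pink
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

COLOR_MAP = {note: note_to_rgb(i) for i, note in enumerate(PENTATONIC_NOTES)}
```

The point of the design is simply that pitch order and hue order coincide, so a player can reason “redder means lower, pinker means higher” when using a hint.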

Once the first note has been guessed correctly, the player advances to level 2, where Mastermind 3.0 adds one more signal to the sequence. Because the system randomizes again, the new sequence is entirely independent of level 1. Mastermind 3.0 continues lengthening the sequence of signals as long as players guess the notes in the correct order. However, if a player fails to repeat the sequence exactly or exceeds the number of guesses, Mastermind 3.0 responds with a “buzz” sound and players are instructed to start again at level 1. On the other hand, if a player succeeds in beating all four levels of the game, Mastermind 3.0 indicates its defeat by playing a “winning” sound aloud through the speaker (fig. 4).
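The round logic can be sketched in a few lines of Python. This is a simplified illustration under my own assumptions, not the installation’s actual code: `play`, `get_guess` and `NOTES` are hypothetical names, and a failed guess is modeled as ending the round rather than looping back to level 1.

```python
import random

# Simplified sketch of the round logic: each level draws a fresh random
# sequence (independent of earlier levels); a wrong guess or running out of
# guesses triggers the "buzz", and clearing all four levels wins the game.
NOTES = list(range(15))    # indices of the 15 pentatonic keys
MAX_GUESSES = 25
LEVELS = 4

def play(get_guess, rng=random) -> bool:
    """Run one game; `get_guess` returns the player's next note index."""
    guesses = 0
    for level in range(1, LEVELS + 1):
        sequence = [rng.choice(NOTES) for _ in range(level)]
        for target in sequence:
            if guesses >= MAX_GUESSES:
                return False               # buzz: guess limit exceeded
            guesses += 1
            if get_guess() != target:
                return False               # buzz: player must start over
    return True                            # "winning" sound through the speaker
```

Injecting the random generator (`rng`) keeps the sketch testable: a seeded generator reproduces the same sequences on every run.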


Mastermind 3.0 was first exhibited and tested at the Interactive Architecture Lab’s Summer Show in 2019, and then again, a few weeks later, at the Samsung Experience store in Kings Cross. These exhibitions provided an experimental setup by which I could observe the diverse ways that people integrate their perceptions to problem solve.

Through extensive research and the analysis of observations at these two open exhibitions, I have outlined a comprehensive review of how people perform and experience multi-sensory tasks. My research details how to design for the broadest range of users and contexts by understanding a user’s psychological characteristics (such as cognitive abilities, motivation, level of experience, task applications, and physical attributes).

Figure 3: RGB color associations for 15-key MIDI keyboard

The goal for this installation was to create an ear-training device that is widely accessible to anyone who may be interested. As a non-musically trained audiophile, I have forever been torn between a love for music and a minimal understanding of sound. Thus, I wanted to design a visual learning system for an ear-training device, offering people like me a new approach to learning sound. I set out to prove that developing an ear does not have to be one-dimensional; my ambition was to invent a learning system that could be intuitively understood, by mapping it across multiple sensory modalities. Mastermind 3.0 is therefore intended to support a more natural approach to integrated cognition, perception and the development of multi-sensory integration.

Figure 4: Mastermind 3.0 setup

Cognitive Psychology:

Learning is an inherent human ability, but it is not an easy concept to comprehend (Ertmer 1993). The study of how people learn has been defined and adapted by researchers and psychologists alike for generations. While there is still no agreed-upon definition, the unifying question between them remains: “Where does knowledge come from and how do people know?” (Ertmer 1993).

There are two main schools of thought on how people learn, the first being behavioral theory. Behavioral theory was the first recognized study of learning, in the late 1800s (Petri 1994). This theory suggested that learning begins when a proper response is generated following the presentation of specific environmental stimuli (Ertmer 1993). However, in recent years, psychologists and educators have started to move away from observed behavior, leaning more towards complex cognitive processes, like problem solving (Ertmer 1993).

This brings me to the second theory of how people learn: cognition (Ertmer 1993). Cognitive theory states that individuals can acquire and store new information, which leads to new behavioral characteristics (Petri 1994). Over the past 70 years, theories in cognition have expanded the study of the human mind to look more closely at inner mental activities, such as language, emotions, memory and motivation (Jovanovic 2011). Cognitive psychology focuses on the organization of new and meaningful information with prior knowledge and memory (Schunk 1991).

Theories in cognition are widely considered more appropriate for explaining complex forms of learning, like reasoning, problem solving and information processing (Schunk 1991). This is due to the emphasis cognitive theories put on mental structures, compared to behavioral models (Schunk 1991).

Cognitive processes regard the retention of information in an “unusually accessible state”, such as working memory (Cowan 2005). Working memory refers to the systems needed to retain information while performing complex cognitive tasks (Baddeley 2010).


Information Measurement Theory:

Information theory was first defined in 1948 by Claude Shannon in his paper “A Mathematical Theory of Communication” (Laming 2010). Information theory, or “communication theory” as he called it, concerned the measurement of information transmitted through a variety of different communication channels (Laming 2010). Shannon’s theory was first presented in a psychological context in George Miller’s paper, “The Magical Number Seven, Plus or Minus Two” (Laming 2010).

Correspondingly, Miller presented a theoretical analysis of information measurement that tested the limits of human information capacity (Luce 2003). His paper remains one of the most influential articles on memory capacity in psychology today (Cowan 2015).

Channel Capacity:

In “The Magical Number Seven, Plus or Minus Two”, George Miller proposed ways to measure the amount of information transmitted from one place to another (Miller 1956). “Channel capacity” is the hypothesized upper limit at which stimulus information is transmitted, from input to output, before errors occur (Miller 1956). Most notably, he discovered that across a variety of different tasks and sensoria, the channel capacity was around seven items [plus or minus two] (Miller 1956).

Figure 5: Amount of transmitted information in absolute judgement tasks testing auditory pitch

Absolute Judgement:

Miller determined that there were two ways he could increase the amount of input information provided; the first tested the rate at which the input information was given, while the latter tested an increase in the number of possible alternatives (Miller 1956). We tend to infer that as we increase the amount of input information, the participant will begin to make more and more errors. To test this theory, we can employ tasks of absolute judgement (Miller 1956). Absolute judgement tasks involve supplying participants with one stimulus at a time, where the response is to indicate the category to which each stimulus belongs, based on previous training (Cowan 2015). Miller found that in absolute judgement tasks of one-dimensional stimuli, participants can effectively distinguish only about five to nine items, or as he affirms, the magical number seven (Miller 1956).

In 1953, Irwin Pollack applied principles of absolute judgement tasks to describe a new approach to verbal learning (Miller 1956). One experiment he conducted (fig. 5) concerned tones, asking participants to match them to previously established numerals (Pollack 1953). When a tone was presented, the participant was instructed to respond with the corresponding number, based on previous training. Pollack found that people can transmit about 2.5 bits of information (roughly six distinguishable tones) before errors occur (Pollack 1953).
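Channel capacity is measured in bits, and the arithmetic behind these figures is straightforward: identifying one of n equally likely alternatives carries log2(n) bits of information, so Miller’s seven items corresponds to about 2.8 bits, and a pitch capacity of about 2.5 bits to roughly six tones.

```python
import math

def bits(n_alternatives: float) -> float:
    """Information, in bits, needed to identify one of n equally likely items."""
    return math.log2(n_alternatives)

def alternatives(n_bits: float) -> float:
    """Number of equally likely categories a channel of n_bits can distinguish."""
    return 2 ** n_bits

# Miller's "magical number seven" is about log2(7) = 2.8 bits of capacity;
# a pitch capacity of ~2.5 bits corresponds to 2**2.5, roughly six tones.
```

This is why absolute-judgement results are often quoted interchangeably as bits or as numbers of distinguishable categories.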

That said, there is evidence to suggest that musically trained people can identify upwards of fifty pitches (Miller 1956). During the two exhibitions in which Mastermind 3.0 was presented, I observed this phenomenon with many of the musically trained participants who played the game. Clearly, classically trained ears give people an advantage in ear-training games such as Mastermind 3.0. In particular, musicians appeared the most compelled to beat the game, despite how frustrating it can often become (that’s why they call them mental exercises, right?).

Span of Immediate Memory:

In the examples discussed so far, there are limits to information capacity when users are presented with a single stimulus and instructed to identify it immediately thereafter (Miller 1956). We can expand upon these procedures by presenting multiple stimuli in succession and requiring that the participant retain the information until instructed to respond (Miller 1956). This process is what Miller calls the span of immediate memory, and it deals with the retention of multiple stimuli. Miller notes that there is a clear operational similarity between the absolute judgement experiments and immediate memory (Miller 1956). Although both processes yield magnitudes of about seven, these studies impose different limitations on our ability to process information (Miller 1956). He affirms that “absolute judgment is limited by the amount of information, while immediate memory is limited to the number of items” (Miller 1956).


Furthermore, theories in cognition state that transfers are how information is effectively stored in memory (Ertmer 1993), whereby accurate learning exists only when a learner can apply such knowledge to a different context (Ertmer 1993). This brings me to recoding. Recoding is where users reorganize given sequences by breaking them into smaller units or groups and reformatting (or transferring) them into different contexts (Miller 1956). This is something we do subconsciously in our everyday lives, as we often rephrase conversations or ideas into our own words to form more meaningful associations (Miller 1956). For example, when recoding is applied to a sequence of unrelated words in an absolute judgement task, the channel capacity increases from about five items to around fifteen, once grammar and meaning are applied (Baddeley 2010). Basically, if we can group sequences of items into smaller groups, our transmitted capacity will increase, especially if we can recode in our own words (Miller 1956).


Clearly there is a limit to the span of immediate memory and absolute judgement, but we are not strictly bound by this capacity, as we have adapted and learned ways around it (Miller 1956), such as:

  1. Increasing the number of dimensions by utilizing differing stimuli
  2. Arranging a sequence into smaller groups by combining multiple absolute judgements together

Sensory Modalities:

We as humans naturally perceive the world as a unified source of sensations, but we often forget that these perceptions are the combination of a variety of different unimodal interactions (Turk 2013). We use these senses, in unity with our experiences, to actively explore, understand and give explanation to our natural world (Turk 2013).

Furthermore, when we integrate unimodalities, we do not get a sum, but rather an integrated whole (Turk 2013). Our natural perception processes concern systems that use either multiple modalities or multiple channels, a.k.a. multimodal integration (Turk 2013).

Multimodal Integration:

Contrary to the natural multi-sensory human interactions we experience in our daily lives, HCI (human-computer interaction) has traditionally focused on unimodal communication (Turk 2013). While technically every interaction with a computer is multimodal (such as the sounds and lights when the hard drive starts up), I am specifically talking about interactive computing with data input and output (Turk 2013). Nevertheless, now that microcontrollers and sensors are readily available at affordable prices, more in-depth research into multimodal HCI can finally be realized.

Richard Bolt at the MIT Architecture Machine Group (now MIT Media Lab) is regarded as the first to design a device with these multimodalities in mind (Turk 2013). Bolt and his team built the Media Room, where they exhibited Put That There, a device that integrated gesture and voice inputs from a participant into natural interactions with a spatial wall display (Turk 2013). The importance of this piece was that none of the phrases could be interpreted solely from the gesture or the speech alone; they were required to be used in tandem (Turk 2013). This compounding of sensory inputs engendered a more natural mode of interaction, from human to machine and then machine back to human (Turk 2013).

Computer scientist and researcher Leah Reeves has compiled a set of guidelines for designing multimodal human-computer interactive devices (Reeves 2004). Her guidelines detail the importance of designing for the most accessible and broadest range of users and conditions (Turk 2013). The first is that the device should “maximize human cognitive and physical abilities” by utilizing the specific advantages of each modality to reduce a user’s memory load (Reeves 2004). For example, she states that the presentation of visual stimuli should be combined with a user’s input of spatial information for more effective retention (Reeves 2004).

The second principle is one that I would like to highlight, as it refers to a system’s adaptability to the needs of different users and contexts (Reeves 2004). This brings me to a common multimodal myth: just because a system is designed for multimodal use does not mean it will be used as such (Oviatt 1999). On the contrary, each user brings their own abilities, experiences and perceptions and applies them accordingly (Oviatt 1999). Studies have shown that users are likely to combine multimodalities and unimodalities to better fit their needs and preferences (Oviatt 1999).

While exhibiting Mastermind 3.0 this past summer, I observed how participants play and what strategies they use, as a basis for my research. The feedback I received from players showed that a variety of different abilities and cues were used to try and beat Mastermind 3.0. One participant explained that he was color blind, but that he was still heavily motivated to beat the game by applying his auditory skills and visuo-spatial understanding. This also seemed to be the case for many of the musicians who tried to defeat Mastermind 3.0; some were found not attending to the rings at all, which proved detrimental to their sequencing of the notes, as the notes were often correct but too early or too late in the sequence. As Mastermind 3.0 is an ear-training device, auditory signals are obviously the most prominent sensorium utilized. However, as the defeat of some users showed, hearing cannot always stand on its own; one must apply visual, auditory and spatial awareness together to achieve a victory over Mastermind 3.0.

Lastly, there is feedback and error prevention. Reeves maintains that systems should generate functions and responses that can be easily understood (Turk 2013). As previously mentioned, the original game of Mastermind included feedback with black and white pegs to provide the user with hints and clues; without them, the game would become nearly impossible and likely boring. While designing Mastermind 3.0, I wanted to maintain the fundamentals of this feedback by providing users with a clearly defined “hint” key. The “hint” key not only replays the target sound sequence but is also a way to quickly reset a turn, if the response generated was not correct or intended. Additionally, Reeves states that “errors can be minimized by providing clearly stated exits from a task, modality or system” (Reeves 2004). In Mastermind 3.0 the “start” key, also known as the “restart” key, provides an out if a game is not going as intended.

Ultimately, there are a variety of potential advantages of multimodal interfaces, which I have outlined here (Oviatt 2000):

  1. Support greater precision of spatial information (see visuo-spatial associations below)
  2. Provide users with alternative interaction methods
  3. Enhance error prevention
  4. Accommodate a wider range of users, tasks, and environments

Multimodal vs. Unimodal:  

A variety of empirical studies have shown that multimodal interactive systems are generally preferred by users over unimodal alternatives (Xiao 2002). Moreover, multimodal integration allows for more flexibility and often leads to the inclusion of adaptation methods that better meet the needs of more diverse users (Xiao 2002).

It would be fair to assume that adding more modalities would produce more errors. On the contrary, however, the channel capacity for multimodal tasks is higher than for unimodal ones (Samman 2004). According to a study performed at the University of Central Florida, multimodal channel capacity proved to be nearly three times the magical number seven of its unimodal counterparts (Samman 2004). This implies that humans process information faster and more efficiently when it is presented in multiple modalities (van Wassenhove 2005).

Cross-Modal Interactions:

Cross-modal interactions occur when a perceptual experience in one sense alters how another sense responds (O’Callaghan 2012). Cross-modality differs from multimodality in that it concerns the influence one sensory modality has on the perception of another.

Integrating perception with agency in interactive design includes all the senses and thus needs to address interactions not only between human and machine but cross-modally. A neurological condition known as synesthesia provides a biological context for cross-modal interactions (Coen 2001). Synesthesia is a condition in which stimulation of one sense affects another sense or thought process (Afra 2009).


We have previously established a multitude of benefits that multimodal systems provide for interactive and enhanced learning between humans and machines; however, we must take it a step further by developing an understanding of the interactions within our brains, or cross-wiring.

Synesthesia is a congenital condition in which the perceptual experience of one particular stimulus evokes supplementary experiences or sensations, often across sensoria (Bor 2014). Synesthesia was originally dismissed as an overactive imagination or metaphorical thinking, but it is now scientifically recognized as “an involuntary or fixed sensory mode of perception, based on atypical cross-wiring between differing sensorium” (a.k.a. involuntary synesthetic perception) (Williams 2015).

Notably, people with synesthesia show signs of enhanced memory over proportionately matched controls (Rothen 2012). Although we are unable to determine exactly why synesthetes exhibit enhanced memory, researchers have hypothesized that it is primarily due to a richer world experience, which in turn enhances the systems for encoding and memory (Rothen 2012).

It is important to note that no two synesthetes encounter the same sensations or experiences. For example, two people with audio-visual synesthesia may see two entirely different things while listening to the same music (Herman 2003).

One case study of a mathematical and linguistic savant, also known to have Asperger’s syndrome and involuntary synesthetic perception, set a record for pi memorization at just over 20,000 digits in about five hours (Rothen 2012). The subject attributed his enhanced memorization skills to his synesthetic condition, in which he sees numbers as “three-dimensional landscapes” (Holden 2005). He details that each digit has its own color and shape, and sometimes sound (Holden 2005). He then treats such sequences visuo-spatially by placing objects, numbers, etc. in physical spaces in his mind (Hughes 2018). This is an extreme example that illustrates Miller’s research into spatial recognition and recoding.

Neuroscientist Vilayanur Ramachandran wanted to analyze the extent of this subject’s savant skills by testing his memorization of 100 digits in 3 minutes, with the digits presented at a consistent size (Holden 2005). Not only was he able to memorize 68 digits and their locations in that short time, but he also retained the same 68 digits when tested again 3 days later (Holden 2005). However, when the test was given again with digits of varying sizes, to disrupt the subject’s visuo-spatial memory, he retained only 16 items (Holden 2005).

This case echoes Charles Eriksen’s findings (on non-synesthetes), which ascertained that when size, brightness and hue all vary together in complete correlation, the transmitted information increases substantially, from 2.7 bits [when each dimension is measured individually] to 4.1 bits (Miller 1956). This amounts to an increase in channel capacity achieved by broadening the range of the input, without increasing the amount of information in any one dimension (Miller 1956).

While there is some evidence to suggest that synesthesia is an inherited trait, it is not conclusively genetic (Bor 2014). Furthermore, the specific experiences that emerge are likely determined by individual factors (Newell 2015). An alternative view of the condition is that it derives from “repeated exposure to combined perception at key developmental stages” (Bor 2014). Thus, it is possible that diversely intelligent children create “semantic hooks” to enhance memory; these unconscious memory aids may then lead to synesthetic traits, which could appear engendered in adulthood (Bor 2014).

Moreover, Bor and Rothen conducted a study that tested whether phenomenological synesthetic experiences can be learned by training non-synesthetic controls (Bor 2014). The controls participated in an extensive training module of reading tasks and adaptive memory exercises, designed to reinforce 13 specific letter-color associations (Bor 2014). Genuine synesthetes have typically been shown to outperform controls in both learning and retention (Rothen 2012). After adequate training, however, the controls produced lower (that is, better) scores on color-consistency tests (fig. 6) (Bor 2014). It is important to note that when retested three months after training, the controls’ synesthetic phenomenology had mostly faded from memory (Bor 2014). Nevertheless, Bor and Rothen’s experimentation and analysis showed that it is possible to alter how humans experience the perceptual features of the world with adequate and consistent training (Bor 2014).

Figure 6: Color Consistency Tests of non-synesthetes pre and post training


In conclusion, there is much evidence to suggest that synesthesia is linked to enhanced memory and performance, as well as enhanced sensory processing. This link between enhanced memory and synesthesia is due to changes in cognitive systems where perceptions become intertwined (Rothen 2012). In addition, synesthesia can also be applied to supplementary learning outcomes in classical conditioning (Rothen 2018).

Mastermind 3.0 was heavily inspired by the concept of synesthesia and the belief that interaction is enhanced when information is presented in multiple modalities. This suggests that if a player applies only one of these sensory modalities, they limit the amount they can remember and retain.

As the case studies above suggest, synesthetic ability is not just a fascinating phenomenon for the gifted few (Williams 2015) but can also be learnt with consistent training (Bor 2014). Therefore, Mastermind 3.0, in its current state of fifteen colors, asks participants to focus carefully on the synesthesia-like effects of the system’s auditory and visual cues, to enhance memory and retention.

Visuo-spatial Associations:

I want to briefly address feedback I have received from some participants, who suggested making the ear-training device more accessible by turning Mastermind 3.0 into a mobile app. My response has remained the same: visuo-spatial perception and understanding are key features of my design, as they play a crucial role in enhancing memory in multi-sensory applications (Turk 2013).

As we have already seen with the synesthetic savant, visuo-spatial associations can add another layer to working memory by way of multi-sensory integration. A study conducted at the University of Rome in 2011 found that visuo-spatial working memory was enhanced in multimodal interactions compared to unimodal versions (Botta 2011). In this study, although neither the auditory nor the visual cues produced enhanced working-memory outcomes when presented on their own, the combination of both cues prompted enhanced visuo-spatial working-memory biases (Botta 2011). Botta attributes this to the multi-sensory integration processes that coexist between the two spatial cues (Botta 2011). Furthermore, this indicates the benefit of incorporating visuo-spatial cues to improve cognitive performance in multi-sensory integration, over unimodal tasks of the same conditions (Botta 2011).


Multi-Intelligence Approach to Learning:

Now that we have set the groundwork for memory and sensory modality with a wide range of case studies and experiments, the real question is: why? In school, learning is primarily focused on math and linguistics, but this does not allow students with different talents or intelligences to expand their knowledge and interests (Jovanovic 2011). By contrast, interactive games like Mastermind 3.0 can provide a multi-intelligence approach to learning by highlighting certain intellects (Jovanovic 2011).


Within the study of cognition, Deci and Ryan conceived what they called self-determination theory (SDT) (Ryan 2000). SDT is one of the most well-established and referenced frameworks in motivation theory today (Mekler 2017). Their theory maintains that human development is driven by the fulfillment of psychological needs (Jovanovic 2011) for “competence, autonomy, and relatedness” (Deci 2000). Moreover, the inherent satisfaction of psychological needs is required for high degrees of motivation and is therefore derived from what people find noteworthy or meaningful (Jovanovic 2011). Motivation is thus a byproduct of fun.

Denis and Jouvelet applied principles of SDT to inform their definition of fun, split into two categories: pleasure and desire (Denis 2005). Combined, pleasure and desire create what they call ludic tension, an inner state of immersion in which a user is so engaged in an activity that they “lose track of time and the outside world” (fig. 7) (Denis 2005). In Denis and Jouvelet’s terms of “intrinsically motivated states” (Denis 2005), Mastermind 3.0 approaches this quality of discovery by providing the user with cerebral exercises that keep players engaged, by way of attainable goals and an approachable interactive design aesthetic.

Figure 7: Intrinsic motivations – a balance between challenge and skills


Motivation has been shown to increase effective cognitive strategies for enhanced memory, while amotivation “decreases memorization and personal development” (Jovanovic 2011). Over the past few years, developers have applied principles of motivation theory to traditional learning by way of gamification (Buckley 2018). Gamification involves the application of “motivational affordances” to non-game contexts, to foster more effective engagement (Mekler 2017). This is achieved by linking the SDT approach to motivation theory with game elements, to develop a framework for multi-sensory learning (Buckley 2018), as in the case of Mastermind 3.0.

Fundamentally, games are constructed on a reward-based system: when you accomplish an achievement, you gain some sort of merit. Rewards have been shown to psychologically motivate a participant to continue playing, due to the positive feedback received (Islam 2017). The more frequently a participant plays (practices) a game, the more proficient they become and thus advance on to more difficult levels (fig. 8) (Islam 2017).
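The practice-proficiency-reward loop described above can be sketched as a small difficulty-adjustment rule. This is a hypothetical illustration of the principle, not part of Mastermind 3.0; the function name and the promotion/demotion thresholds are my own assumptions.

```python
# Hypothetical sketch of a reward-based difficulty loop: a high recent success
# rate promotes the player to a harder level (the reward), while repeated
# failure eases the challenge so that goals remain attainable.

def adjust_level(level: int, recent_results: list, promote_at: float = 0.8,
                 demote_at: float = 0.3) -> int:
    """Pick the next difficulty level from a window of recent win/loss results."""
    if not recent_results:
        return level
    success_rate = sum(recent_results) / len(recent_results)
    if success_rate >= promote_at:
        return level + 1                   # reward: advance to a harder level
    if success_rate <= demote_at and level > 1:
        return level - 1                   # ease off to keep the player engaged
    return level
```

Keeping the challenge matched to skill in this way is exactly what sustains the ludic tension between difficulty and ability (fig. 7).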

Figure 8: Advantages to game-based learning


Games must first set out clear goals and lessons that can be learned through practice, and then adjust instructions and difficulty to best meet the needs and abilities of the user (Jovanovic 2011). This process generates a more personalized approach to learning, which results in greater attentiveness and thus more motivated learners. Furthermore, this methodology prompts higher degrees of problem-solving skill, more strategic planning, time management, multitasking, and most importantly, the ability to adapt to ever-changing scenarios (Jovanovic 2011).
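The goal-then-adjust loop described above can be sketched as a minimal difficulty controller: raise the level after a streak of successes, lower it after a failure. This is a hypothetical illustration of the principle only; the class name, thresholds, and level range are my assumptions, not part of any implementation discussed here.

```python
class DifficultyController:
    """Minimal adaptive-difficulty sketch (illustrative assumption, not
    an actual game implementation): difficulty rises with sustained
    success and falls after failure, keeping tasks near the player's
    current ability."""

    def __init__(self, level=1, max_level=10, streak_to_advance=3):
        self.level = level
        self.max_level = max_level
        self.streak_to_advance = streak_to_advance
        self.streak = 0  # consecutive successes at the current level

    def record_result(self, success):
        if success:
            self.streak += 1
            if self.streak >= self.streak_to_advance and self.level < self.max_level:
                self.level += 1   # player is proficient: harder tasks
                self.streak = 0
        else:
            self.streak = 0
            if self.level > 1:
                self.level -= 1   # player is struggling: ease off

# Three successes advance the level; the failure drops it back.
ctrl = DifficultyController()
for outcome in [True, True, True, False]:
    ctrl.record_result(outcome)
```

Keeping a success streak as the promotion criterion (rather than a single success) is one simple way to avoid advancing a player on a lucky guess.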

If we are able to better understand our users and profile them accordingly, we can further encourage adaptive learning through the use of multi-intelligence5 theories (Jovanovic 2011). Jovanovic presents a model categorizing these prospective profiles into different dominant contexts of learning interfaces (fig. 9) (Jovanovic 2011).

Figure 9: Proposed player classifications


A cognitive study conducted by Jovanovic aimed to measure the overall quality of learning games by testing the correlation between the motivational factors designers employed and the quality of the resulting games (Jovanovic 2011). The study divided a large class of students into smaller groups, each of which was to design a learning game; the motivational effects each group utilized were then correlated with the overall quality of their game (Jovanovic 2011). The results (fig. 10) found that the groups who utilized the most motivational effects produced higher-quality games overall (Jovanovic 2011).

Figure 10: Linear Regression Model7 testing motivational effects (y-axis) with overall game quality (x-axis)
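Footnote 7 defines the linear regression model7 behind fig. 10; the fit itself is a routine least-squares computation. The sketch below illustrates that computation with invented numbers – these are not Jovanovic's data, only a demonstration of the model's form.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = slope * x + intercept (see
    footnote 7). Illustrative only; the data passed in below are
    invented, not the study's measurements."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x).
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical trend: groups using more motivational effects (x)
# produce higher-quality games (y), as fig. 10 reports.
slope, intercept = linear_fit([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
```

A positive slope is what the study's correlation claim amounts to in this model.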



Multi-sensory integration has been shown to enhance learning by providing a multi-intelligence5 approach to perception and interaction (Williams 2015). Based on these findings, the gamification6 system I have applied to Mastermind 3.0 nurtures ambitions based on Deci and Ryan’s self-determination theory (Jovanovic 2011). SDT provides strategies for optimizing motivation (Ryan 2000) by fostering comfort, agency and capability (Williams 2015). By applying these motivational theories to learning paradigms, we can increase the naturalism of training modules and therefore produce more effective learning (Bor 2014).

Precedent Projects:

The following will detail specific multi-sensory precedent projects that have helped to inform the design of Mastermind 3.0.

Gordon Pask:

Gordon Pask was one of the pioneers of cybernetics and conversation theory (Haque 2007). Much like my own, his mechanical devices were driven by theoretical principles in cognitive psychology (Bird 2008). This is why Pask considered himself a “mechanical philosopher”: he was far more interested in how we understand understanding than in the understanding of things themselves (Bird 2008). What differentiates cybernetics, and more specifically Pask, from traditional psychology or engineering is his approach to learning and knowledge within the field of performance (Pickering 2010). Much of my design for Mastermind 3.0 has been informed by a combination of Pask’s mechanical philosophies with traditional theories in psychology.

Aesthetically Potent Environments:

As Pask states, “Man is prone to seek novelty in his environment and, having found a novel situation, learn how to control it” (Pask 1971). In this context, control denotes “explaining” or “relating” to an existing body of experience (Pask 1971). This methodology informs what he defines as aesthetically potent environments (Pask 1971).

Pask defines aesthetically potent environments using four main principles. The first is that the design must offer sufficient variety – just enough to keep participants engaged, but not so much that it becomes chaotic and incomprehensible (Pask 1971).

The second and third principles go hand in hand: the second specifies the necessary inclusion of recognizable concepts that can be learnt by way of the third, which states the importance of providing clear instructions or clues to guide participants through known and unknown concepts (Pask 1971). As it currently stands, Mastermind 3.0 is supported by a standard MIDI piano keyboard that provides distinct cues and instructions on how to play (“start” and “hint” keys).

Finally, the fourth and most important principle is that [the design] “respond to the participant, by engaging him in conversation” and adapt accordingly (Pask 1971). As previously discussed, training paradigms foster motivation and engage learners when systems are designed with adaptability in mind.

Pask affirms that this final principle is not strictly required of an aesthetically potent environment; however, it fulfils what he believes makes a truly novel work of art (Pask 1971). It is important to note that while Mastermind 3.0 does not currently address this fourth principle, my goal since conception has remained fixed on creating an adaptable piece that responds to users’ abilities and needs. The future design of Mastermind 4.0 will explore this method of adaptation and conversation further by including an optional practice or “free-play” sequence, devoid of all instructional cues, to foster a more personalized development of audio-visual associations [in the context of Mastermind 3.0]. Ultimately, this would give users the option to follow the game’s directions or to learn on their own terms.

Additionally, I have been keen on encoding a system of intelligent machining8 that responds to a player’s abilities by varying the speed at which signals are given. Adjusting the difficulty of game play will help to accommodate a larger scope of users – from musicians to inexperienced music enthusiasts alike.
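One simple way to realize this signal-speed variation is to map a player's recent accuracy onto the delay between cues. The sketch below is a speculative illustration of that mapping; the function name, parameter values, and blending factor are my assumptions, not Mastermind 3.0's actual design.

```python
def signal_interval(base_interval_ms, recent_accuracy,
                    min_ms=250, max_ms=2000):
    """Map a player's recent accuracy (0.0-1.0) to the delay between
    audio-visual signals: accurate players get faster cues, struggling
    players get slower ones. Illustrative sketch only; all parameter
    values here are assumptions."""
    # Linear interpolation: 0% accuracy -> max_ms, 100% accuracy -> min_ms.
    target = max_ms - recent_accuracy * (max_ms - min_ms)
    # Blend toward the base tempo so the pace changes gradually
    # rather than jumping with every answer.
    return 0.5 * base_interval_ms + 0.5 * target

# A perfect recent record drifts the cue rate toward its fastest setting.
print(signal_interval(1000, 1.0))  # 625.0
```

Blending against the previous tempo keeps the game from oscillating wildly when a player's accuracy fluctuates between rounds.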


My initial motivation for designing Mastermind 3.0 was my fascination with the concept of synesthesia. Much of my interest in this subject began after researching Pask’s Musicolour: a performance system of colored lights that responded to audio input from a human performer (Haque 2007). The output of the lights depended upon the frequency and rhythm of the performer (Pask 1971).

Pask was also notably curious about synesthetic perception during this time, as the augmentation of sound with light was rare in the 1950s (Pask 1971). He was aware that if a synesthetic experience were to appear, it would differ among performers and audience members (Pask 1971). Hence his approach: to encode the system with an adaptable learning function intended to translate the auditory signals into an encoded visual language (Pask 1971).

Not only did Musicolour provide a bespoke visual language based on a human performer’s signals, it also had the ability to grow “bored” if the rhythm or frequency became too static (Haque 2007). Eventually, the system would stop illuminating the lights, to instruct the user to change up the input (Haque 2007).

Although Pask’s initial interest in the project was synesthesia, he quickly realized that the learning capabilities of the machine were what made it so unique (Pask 1971). The human performer “trained the machine and it played a game with him,” but not in a static or predictable way (Pask 1971). This created an infinite loop in which the machine responded to the human performer’s improvisations and unpredictability by interpreting them and feeding them straight back to the performer (Pickering 2010). Which begs the question: who is controlling whom?

The answer is that neither the human nor the machine controlled the performance; they worked in tandem as extensions of one another, wherein the machine learns and adapts with the human performer, while the human performer learns and adapts with the machine (Bird 2008). This is central to Pask’s theory that “man” is essentially adaptive, and machines can thus mimic human behavior (Pickering 2010).


Pask wanted to take his adaptive machining methodology and apply it to more universal systems by returning to his initial concentration: learning (Bird 2008). In the mid-1950s, technology was developing at a remarkable rate, and so were most commercial businesses – hence the need for competent keyboard operators (Bird 2008). Pask created what he called the first “Self-Adaptive Keyboard Instructor,” or SAKI (Haque 2007).

Figure 11: Operator using SAKI


Essentially, SAKI was a training device that tested participants’ speed and accuracy in typing alphabetic and numeric symbols on a 12-key keyboard (Haque 2007). The system guided participants with light cues (arranged in the same spatial layout as the keyboard) to press the relevant keys to encode data (fig. 11) (Bird 2008). Initially, items were randomly presented at a slow and uniform rate, with corresponding lights remaining on for a long period of time (Bird 2008). The machine stored the operator’s response time for each item until all four exercise lines had been achieved (Bird 2008). SAKI provides an unequivocal answer to Pask’s four principles of aesthetically potent environments by varying the difficulty of tasks for each item to best meet the needs and capabilities of users (Bird 2008).
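SAKI's per-item adaptation, as described above, can be approximated in a few lines: each item keeps its own cue duration, nudged toward the operator's measured response time, so weaker items keep longer cues while mastered items get shorter ones. This is a sketch of the described behavior, not Pask's original electromechanical mechanism; the parameter values are assumptions.

```python
def update_cue_duration(durations, item, response_time_ms,
                        floor_ms=800, step=0.1):
    """SAKI-style pacing sketch: move the stored cue duration for one
    item toward the operator's observed response time, never dropping
    below a floor. Illustrative only; all values are assumptions."""
    current = durations.get(item, floor_ms * 2)  # new items start generous
    # Exponential moving average toward the observed response time,
    # so one fast answer does not collapse the cue instantly.
    durations[item] = (1 - step) * current + step * max(response_time_ms, floor_ms)
    return durations

# As responses to item "A" speed up, its cue shortens.
durations = {}
update_cue_duration(durations, "A", 1500)
update_cue_duration(durations, "A", 900)
```

The per-item dictionary mirrors the way SAKI tracked each symbol separately, concentrating support on the operator's weakest items.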

Pask’s aim for SAKI was to mimic the relationship between teacher and student – wherein a human teacher responds directly to a student’s proficiencies by focusing on the weaker aspects of measured areas (Haque 2007). The machine not only responds to the student’s current input, but also adapts its responses based on prior interactions (Haque 2007). Much like Musicolour, the machine responds to the student while, simultaneously, the student responds to the machine (Haque 2007).

Not only is the machine being treated as a black-box9, but the user is as well (Bird 2008). The machine tries to imitate the user’s non-stable characteristics to create a relationship between itself and the user (Bird 2008). The feedback received is constantly updating and adjusting variables to reach a desired goal (Bird 2008). Essentially, the system is conditioning you, and conversely you are conditioning it (Bird 2008).

Pask concluded that people are motivated by the desire to reach stable interactions with machines, rather than to reach any particular performance goal (Bird 2008).

Chris Creed & Paul Newland, MEDIATE:

MEDIATE (Multi-sensory Environment Design for an Interface between Autistic and Typical Expressiveness) was a multi-sensory environment designed for children on the autism spectrum with limited verbal and social skills (Williams 2015). Psychologically, it centered around agency in a multi-sensory environment, devoid of any social context (Williams 2015).

Owing to their limited social skills, children with autism often struggle with the experience of control, as their world is often chaotic and unpredictable (Williams 2015). MEDIATE was different in its design because it provided children on the spectrum with a place to interact physically with the world through embodied10 learning, affording them control of their environment, behavior and expressiveness (Williams 2015).

MEDIATE had a wide array of visual, audio and tactile interfaces with organic and active materials and shapes (Williams 2015). The system was designed to be adaptive, creating individual sensory profiles based on the behaviors of its users (Williams 2015).

While MEDIATE was designed specifically for children with autism, it was also an opportunity for parents and caregivers to observe their child’s behavior and sensory preferences (Williams 2015). Notably, one mother observed as her son with Asperger’s Syndrome became captivated by the TuneFork; the more he played, the more complex the interaction level became (Williams 2015). Eventually, he was able to change the color of the screens by tapping the TuneFork and was observed continuously selecting a purple hue (Williams 2015). His mother expressed that she believed this was a form of sensory expression, so she proceeded to paint his bedroom walls purple (Williams 2015). Subsequently, the child behaved more calmly at home and was able to sleep through the night for the first time in years (Williams 2015).

MEDIATE provides a foundation for an embodied10 multi-sensory approach to reaching disadvantaged groups. Although Mastermind 3.0 does not currently incorporate methods of embodied10 learning or conversational adaptability, Mastermind 4.0 aims to achieve this by expanding its scope to more diverse users (see below).


In this paper, I have outlined a theoretical framework, composed of collated research on information measurement, developments in multi-sensory integration, and psychological proficiencies and motivations, to provide a new approach to learning characterized by perceptual content. My analysis provides a psychologically motivated foundation for integrating a multi-sensory approach into cognitive interactive games.

My research has afforded Mastermind 3.0 with methodologies and procedures that assess users’ psychological characteristics, abilities, and preferences, to better identify how people interact with and make sense of the world. As previously discussed, identifying more individualized user profiles has been shown to enhance memory and retention by motivating learners on a more personal level. That said, in order to develop a more personalized approach to multi-sensory learning, Mastermind 3.0 will need to explore techniques of adaptive processing, grounded in empirical research.

Mastermind 4.0:

While Mastermind 3.0 has employed a rich theoretical framework to inform its design, empirical studies of users’ psychological proficiencies and incentives still need to be conducted. For example, I could test the channel capacity of musically trained versus non-musically trained participants, and then adopt these results to inform the design of Mastermind 4.0.

A speculative design for Mastermind 4.0 (fig. 12) incorporates the previous methodologies with affordances to explore a more embodied10 approach to learning – or what I like to call, Mastermind meets Twister11. Mastermind 4.0 will maintain the same basic game play as before, but will use only 8 colors instead of the previous 15. This will further benefit memory by incorporating fewer items but longer sequences (tasks in absolute judgement). Through this new embodied10 approach, the player’s entire body becomes the controller of the keys (previously a player’s finger in Mastermind 3.0).

Figure 12: Preliminary sketch of Mastermind 4.0
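The fewer-items, longer-sequences trade-off above can be made concrete in a short sketch: draw each round's sequence from a reduced palette of 8 colors, growing the sequence by one item per round. This is purely speculative illustration for Mastermind 4.0 – the color names, starting length, and growth rule are all my assumptions, not an implemented design.

```python
import random

# Reduced palette: 8 colors, down from Mastermind 3.0's 15
# (the specific color names are placeholder assumptions).
COLORS = ["red", "orange", "yellow", "green",
          "blue", "purple", "pink", "white"]

def next_sequence(round_number, rng=random):
    """Generate the color sequence for a round: fewer distinct items
    (absolute judgement stays within channel capacity) but a sequence
    that lengthens as rounds progress. Speculative sketch only."""
    length = 2 + round_number  # grow by one item per round
    return [rng.choice(COLORS) for _ in range(length)]

seq = next_sequence(3)  # a 5-item sequence drawn from the 8 colors
```

Holding the number of alternatives at 8 while stretching the sequence shifts the load from absolute judgement onto memory span, which is exactly the trade-off the redesign targets.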


Not only will players now be fully immersed in their environment, but my aim is also to implement a set of guidelines that allow the system as much self-organization as possible, such as the aforementioned practice and “free-play” modes (see SAKI, MEDIATE). Furthermore, conversational adaptability between machine and user is necessary to further enhance memory and retention. This will be achieved by providing a more personal approach to learning, varying the speed of signals based on a player’s skillset. The system can thus potentially become an extension of the user’s body as the complexity of interaction is increased through repetitious and consistent training challenges.

In conclusion, this review can be used as a basis for collaborative development within the field of interactive multi-sensory game design. Further empirical studies will need to be conducted on Mastermind 3.0 to develop a language of information capacity to inform Mastermind 4.0. Ultimately, I will continue exploring the cognitive attributes of multi-sensory engagement by simplifying and advancing the design processes of mechanisms for interactive learning games.



  1. Alternatives: being different options for decision making (Miller 1956)
  2. Transfers: when a learner understands how to apply knowledge in a different context in cognitive theory (Ertmer 1993)
  3. Phenomenology: the study of phenomena (essences) in philosophy, separate from the natural world (Merleau-Ponty 1956)
  4. Classical Conditioning: naturally occurring stimulus producing an unconditioned response (Skinner 1974)
  5. Multi-Intelligence: traditional approaches to learning focus mainly on mathematics and linguistics; multi-intelligence theory affirms there are 8 ways in which humans learn, so treating everyone the same in regard to intellect is unfair (Lynch 1995)
  6. Gamification: applying game design elements to nongame contexts to enact effective motivating behavior (Buckley 2018)
  7. Linear Regression Model: linear approach to the relationship between a dependent and independent variable (Wikipedia 2019)
  8. Intelligent Machining: in regard to, robots that can be programed to take actions or make choices based on input from sensors (Intelligent Robot 2019)
  9. Black-boxes: first coined by Ashby concerns intelligent computing, of which a device or system appears to be intelligent8 but we have no concept of the inner mechanisms (Glanville 1982)
  10. Embodied: be an expression of or give a tangible or visible form to (an idea, quality, or feeling) (Stolz 2015)
  11. Twister: a game played on a large plastic mat with a 6×6 grid of colors spread on the floor; players are given instructions of where to place body parts in order to land on the correct color (Wikipedia 2019)


Afra, P., Funke, M., & Matsuo, F. (2009). Acquired auditory-visual synesthestia: A window to early cross-modal sensory interactions. Psychology Research and Behavior Management, 2, 31–37.

Baddeley, A. (2010). Working memory. Current Biology, 20(4), R136–R140.

Beer, S. (1960). Cybernetics and management.

Berger, C. C., & Ehrsson, H. H. (2013). Mental imagery changes multisensory perception. Current Biology, 23(14), 1367–1372.

Bird, J., & Di Paolo, E. (2013). Gordon Pask His Maverick Machines. The Mechanical Mind in History, 185–211.

Bor, D., Rothen, N., Schwartzman, D. J., Clayton, S., & Seth, A. K. (2014). Adults can be trained to acquire synesthetic experiences. Scientific Reports, 4.

Botta, F., Santangelo, V., Raffone, A., Sanabria, D., Lupiáñez, J., & Belardinelli, M. O. (2011). Multisensory Integration Affects Visuo-Spatial Working Memory. Journal of Experimental Psychology: Human Perception and Performance, 37(4), 1099–1109.

Brankaert, R., Ouden, E. Den, Buchenau, M., Suri, J. F., de Valk, L., Bekker, T., … Bozarth, M. A. (2009). Experiential Probes: probing for emerging behavior patterns in everyday life. International Journal of Design, 9(1), 2880–2888.

Buckley, J., DeWille, T., Exton, C., Exton, G., & Murray, L. (2018). A Gamification–Motivation Design Framework for Educational Software Developers. Journal of Educational Technology Systems, 47(1), 101–127.

Caschera, M. C., D’Ulizia, A., Ferri, F., & Grifoni, P. (2012). Towards evolutionary multimodal interaction. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 7567 LNCS, 608–616.

Coen, M. H. (2001). Multimodal integration – A biological view. IJCAI International Joint Conference on Artificial Intelligence, 1417–1424.

Connelly, L. M. (2010). What is phenomenology? Medsurg Nursing: Official Journal of the Academy of Medical-Surgical Nurses, 19(2), 127–128.

Covaci, A., Ghinea, G., Lin, C. H., Huang, S. H., & Shih, J. L. (2018). Multisensory games-based learning – lessons learnt from olfactory enhancement of a digital board game. Multimedia Tools and Applications, 77(16), 21245–21263.

Cowan, N. (2015). George Miller’s magical number of immediate memory in retrospect: Observations on the faltering progression of science. Psychological Review, 122(3), 536–541.

Denis, G., & Jouvelot, P. (2005). Motivation-driven educational game design: Applying best practices to music education. ACM International Conference Proceeding Series, 265, 462–465.

Driver, J., & Spence, C. (1998). Cross-modal links in spatial attention. Philosophical Transactions of the Royal Society B: Biological Sciences, 353(1373), 1319–1331.

Eriksen, C. W., & Hake, H. W. (1955). Multidimensional stimulus differences and accuracy of discrimination. Journal of Experimental Psychology, 50(3).

Ertmer, P. A., & Newby, T. J. (1993). Behaviorism, Cognitivism, Constructivism: Comparing Critical Features from an Instructional Design Perspective. Performance Improvement Quarterly, 6(4), 50–72.

Focardi, R., & Luccio, F. L. (2012). Guessing Bank PINs by Winning a Mastermind Game. Theory of Computing Systems, 50(1), 52–71.

Friedrich, J., Becker, M., Kramer, F., Wirth, M., & Schneider, M. (2019). Incentive design and gamification for knowledge management. Journal of Business Research, (February).

Galati, G., Pelle, G., Berthoz, A., & Committeri, G. (2010). Multiple reference frames used by the human brain for spatial perception and memory. Experimental Brain Research, 206(2), 109–120.

Gierasimczuk, N., Van der Maas, H. L., & Raijmakers, M. E. (2013). An analytic tableaux model for deductive mastermind empirically tested with a massively used online learning system. Journal of Logic, Language and Information, 22(3), 297–314.

Glanville, R. (1982). Inside every white box there are two black boxes trying to get out. Behavioral Science, 27(1), 1–11.

Glanville, R. (2009). A (Cybernetic) Musing: Design and Cybernetics. Cybernetics and Human Knowing, 16(3), 175.

Haque, U. (2007). The architectural relevance of Gordon Pask. Architectural Design, 77(4), 54–61.

Herman, S. (2003). Synesthesia. Global Cosmetic Industry, 171(4), 54.

Herring, S. R., & Rights, A. (2008). Working Memory. ReCALL, 20(4), 1–16.

Holden, C. (2005). Colored Memory. Science, 308(5721), 492.

Hughes, J. E. A., Gruffydd, E., Simner, J., & Ward, J. (2019). Synaesthetes show advantages in savant skill acquisition: Training calendar calculation in sequence-space synaesthesia. Cortex, 113, 67–82.

Islam, A. (2017). Cross-Modal Computer Games as an Interactive Learning Medium. (April), 82–90.

Janich, P. (2018). What is information? (Vol. 55). U of Minnesota Press.

Jovanovic, M., Starcevic, D., Minovic, M., & Stavljanin, V. (2011). Motivation and multimodal interaction in model-driven educational game design. IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans, 41(4), 817–824.

Laming, D. (2010). Statistical information and uncertainty: A critique of applications in experimental psychology. Entropy, 12(4), 720–771.

Luce, R. D. (2003). Whatever Happened to Information Theory in Psychology? Review of General Psychology, 7(2), 183–188.

Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390(6657), 279–284.

MacLeod, C. M., & Risko, E. F. (2017). Radical Cognitivism? Distinguishing Behavior from Thought. Journal of Applied Research in Memory and Cognition, 6(1), 22–26.

Mcgurk, H., & Macdonald, J. (1976). Hearing lips and seeing voices. Nature, 264(5588), 746–748.

Mekler, E. D., Brühlmann, F., Tuch, A. N., & Opwis, K. (2017). Towards understanding the effects of individual gamification elements on intrinsic motivation and performance. Computers in Human Behavior, 71, 525–534.

Miller, G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review, 63(2), 81–97.

Newell, F. N., & Mitchell, K. J. (2016). Multisensory integration and cross-modal learning in synaesthesia: A unifying model. Neuropsychologia, 88, 140–150.

O’Callaghan, C. (2012). Perception and Multimodality. The Oxford Handbook of Philosophy of Cognitive Science, (September), 1–28.

Oviatt, S. (1999). Ten myths of multimodal interaction. Communications of the ACM, 42(11), 74–81.

Oviatt, S., & Cohen, P. (2000). Multimodal interfaces that process what comes naturally. Communications of the ACM, 43(3), 45–53.

Oviatt, S. (2003). Advances in robust multimodal interface design. IEEE computer graphics and applications, (5), 62-68.

Pask, G. (1971). A Comment, a Case History and a Plan. Cybernetics, Art and Ideas, 76–99.

Pask, G. (1975). The cybernetics of human learning and performance: A guide to theory and research. Hutchinson.

Pask, G., Elisabeth, T., & York, A. (1976). Conversation Theory – Applications in Education and Epistemology.

Petri, H. L., & Mishkin, M. (1994). Behaviorism, cognitivism and the neuropsychology of memory. Am.Sci., 82(1), 30–37.

Pickering, A. (2010). The cybernetic brain: Sketches of another future. University of Chicago Press.

Pollack, I. (1953). Assimilation of sequentially encoded information. The American Journal of Psychology, 66(3), 421–435.

Pollack, I. (1954a). The information of elementary auditory displays. The Journal of the Acoustical Society of America.

Pollack, I., & Ficks, L. (1954b). Information of elementary multidimensional auditory displays. The Journal of the Acoustical Society of America, 26(2), 155–158.

Porta, M. Information theory. In Last, J. (Ed.), A Dictionary of Public Health. Oxford University Press. Retrieved 16 Sep. 2019.

Reeves, L. M., Lai, J., Larson, J. A., Oviatt, S., Balaji, T. S., Buisine, S., … & McTear, M. (2004). Guidelines for multimodal user interface design. Communications of the ACM, 47(1), 57–59.

Rothen, N., Meier, B., & Ward, J. (2012). Enhanced memory ability: Insights from synaesthesia. Neuroscience and Biobehavioral Reviews, 36(8), 1952–1963.

Rothen, N., Seth, A. K., & Ward, J. (2018). Synesthesia improves sensory memory, when perceptual awareness is high. Vision Research, 153(September), 1–6.

Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68.

Sagiv, N., Simner, J., Collins, J., Butterworth, B., & Ward, J. (2006). What is the relationship between synaesthesia and visuo-spatial number forms? Cognition, 101(1), 114–128.

Samman, S. N., Stanney, K. M., Dalton, J., Ahmad, A. M., Bowers, C., & Sims, V. (2004). Multimodal Interaction: Multi-Capacity Processing Beyond 7 +/− 2. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 48(3), 386–390.

Schunk, D. H. (1991). Learning theories an educational perspective. Macmillan.

Siegel, J. A., & Siegel, W. (1972). Absolute judgment and paired-associate learning: Kissing cousins or identical twins? Psychological Review, 79(4), 300.

Turk, M. (2014). Multimodal interaction: A review. Pattern Recognition Letters, 36(1), 189–195.

Van Wassenhove, V., Grant, K. W., & Poeppel, D. (2005). Visual speech speeds up the neural processing of auditory speech. Proceedings of the National Academy of Sciences, 102(4), 1181–1186.

Williams, R. (2015). Synesthesia: From Cross-Modal to Modality-Free Learning and Knowledge. Leonardo, 48(1), 48-15.

Xiao, B., Girand, C., & Oviatt, S. (2002). Multimodal integration patterns in children. 7th International Conference on Spoken Language Processing, ICSLP 2002, (May), 629–632.

Supplementary References:

Intelligent Robot. (2019). Retrieved September 2019, from https:/

Karch, M. (2019). A Beginner’s Guide to Apps. Retrieved September 2019, from

Lynch, W. M. (1995). Multiple Intelligences. Teaching Education, 7(1), 155–157.

Merleau-Ponty, M., & Bannan, J. F. (1956). What is phenomenology? CrossCurrents, 6(1), 59–70.

Retro Mastermind Game. (2019). Retrieved September 2019, from

Skinner, B. (1974). About behaviourism / B.F. Skinner. London: Cape.

Stolz, S. A. (2015). Embodied Learning. Educational Philosophy and Theory, 47(5), 474–487.

Wikipedia contributors. (2019, September 7). Linear regression. In Wikipedia, The Free Encyclopedia. Retrieved 13:09, September 16, 2019, from

Wikipedia contributors. (2019, July 18). Twister (game). In Wikipedia, The Free Encyclopedia. Retrieved 15:25, September 16, 2019, from

Wolz, S. H., & Carbon, C.-C. (2015). Images in Art and Science and the Quest Image Science. Leonardo, 48(1), 74–75.

Image References:

Figure 1: Retro Mastermind Game. (2019). Retrieved September 2019, from

Figure 2: Yagilowich, A., (2019). Korg Mini MS 20 Synthesizer with rainbow keys, JPEG

Figure 3: Yagilowich, A., (2019). Mastermind 3.0 Color Associations, screenshot, PSD, Photoshop

Figure 4: Yagilowich, A., (2019). Mastermind 3.0 setup, JPEG

Figure 5: Pollack, I., & Ficks, L. (1954b). Information of elementary multidimensional auditory displays. The Journal of the Acoustical Society of America, 26(2), 155–158.

Figure 6: Bor, D., Rothen, N., Schwartzman, D.J., Clayton S., & Seth, A.K. (2014). Adults can be trained to acquire synesthetic experiences. Scientific Reports, 4.

Figure 7: Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68.

Figure 8: Islam, A. (2017). Cross-Modal Computer Games as an Interactive Learning Medium. (April), 82–90.

Figure 9: Jovanovic, M., Starcevic, D., Minovic, M., & Stavljanin, V. (2011). Motivation and multimodal interaction in model-driven educational game design. IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans, 41(4), 817–824.

Figure 10: Jovanovic, M., Starcevic, D., Minovic, M., & Stavljanin, V. (2011). Motivation and multimodal interaction in model-driven educational game design. IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans, 41(4), 817–824.

Figure 11: Watters, A. (2019). Gordon Pask’s Adaptive Teaching Machines. Retrieved 16 September 2019, from

Figure 12: Yagilowich, A., (2019) Speculative design for Mastermind 4.0, PSD, Photoshop
