This was my Master's thesis project at the Tangible Embodied Child-Computer Interaction Lab, School of Interactive Arts and Technology, Simon Fraser University, Canada.
Novice tangible interaction design students often find it challenging to generate input action ideas for tangible interfaces. To address this problem I designed and evaluated a tool consisting of interactive physical artifacts coupled with digital examples of tangible systems and technical implementation guidance. The tool helped students generate input action ideas by enabling them to experience input actions first-hand, supporting hands-on exploration, and introducing possibilities. However, introducing examples at times caused design fixation.
I published the project at ACM CHI in 2019 and 2021 (see Achievements below).
Duration
January 2018 - May 2020 (2 years 5 months)
Tangible interaction design (TIxD) involves designing tangible user interfaces (TUIs). TUIs involve physical artifacts that represent digital information and serve as interactive controls of the computational media [1]. Such interfaces enhance interaction with computational applications by building upon users’ knowledge and skills of interacting with their physical environments [2,3,4].
Input actions are the physical actions (e.g. rotate, slide, squeeze) performed by the users on the interface to interact with the computational application.
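To make this concrete, here is a minimal, hypothetical sketch of how such input actions are typically sensed in a tangible prototype: raw sensor readings are classified into discrete input-action events. The sensor choices (a potentiometer for rotation, a force-sensitive resistor for squeezing), thresholds, and readings below are illustrative assumptions, not details from the thesis.

```python
# Illustrative sketch: classifying raw sensor readings into input actions.
# Assumes a 10-bit ADC (0-1023), a potentiometer for "rotate", and a
# force-sensitive resistor (FSR) for "squeeze". All values are hypothetical.

ROTATE_DELTA = 15        # minimum change in potentiometer reading to count as a rotation
SQUEEZE_THRESHOLD = 600  # FSR reading above which we treat the grip as a squeeze

def classify(prev_pot: int, pot: int, fsr: int) -> list[str]:
    """Return the input actions implied by one pair of sensor readings."""
    actions = []
    if abs(pot - prev_pot) >= ROTATE_DELTA:
        actions.append("rotate")
    if fsr >= SQUEEZE_THRESHOLD:
        actions.append("squeeze")
    return actions

# Simulated stream of (potentiometer, FSR) readings standing in for real hardware.
readings = [(500, 100), (530, 120), (532, 700), (560, 710)]
prev_pot = readings[0][0]
for pot, fsr in readings[1:]:
    for action in classify(prev_pot, pot, fsr):
        print(f"detected input action: {action}")
    prev_pot = pot
```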
Following is an example of a TUI from the Tangible Media Group, MIT Media Lab.
I followed the design research methodology [2] to discover, redefine, and investigate the problem by designing and evaluating interventions based on empirical and secondary research data.
Tasks and Methods
Secondary research
Literature research
Competitive analysis
Problem framing and scoping
User research
Contextual inquiry
Shadowing
Interviews
Design
Idea generation
Selection matrix
Idea detailing
Wireframing
Evidence-based design
Iterative prototyping
Physical prototyping
Evaluation
Case study
Observation
Expert review
Heuristic markups
Lab study
Pilot study
Interviews
Surveys
Alignment table
Data analysis
Affinity analysis
Open coding
Axial coding
Selective coding
Triangulation
Statistical analysis
Video analysis
Design deliverable analysis
Literature Review
I did a literature review covering the ins and outs of tangible interaction design (TIxD), including theories, frameworks, history, strengths & limitations, application areas, design principles & methods, tools, and education. The literature research also involved four other domains:
Competitive Analysis
I did a competitive analysis of tangible tools for idea generation and of design tools for tangible interaction design.
Insights from Secondary Research
Students
3 male, 25-34 years old, all novice tangible interaction design students
Lecturer
1 female
Course
Participants were drawn from a graduate tangible interaction design course
Scoping
I focused on novice tangible interaction design students - students learning tangible interaction design for the first time, with no prior experience in the domain or in similar domains that involve embodied interactions (e.g., gestural interfaces).
Objectives
Understand how novice tangible interaction design students generate ideas for input actions.
Identify challenges these students face in generating input action ideas.
User Screening
I used convenience sampling to recruit and screen a tangible interaction design lecturer and novice tangible interaction design students.
Data Collection
I did contextual inquiry and participatory shadowing by enrolling in a four-month project-based graduate-level tangible interaction design course (IAT 884) at SIAT, SFU that involved many novice tangible interaction design students. It gave me first-hand experience of the course activities along with the opportunity to observe, interact with, and discuss ideas with novice tangible interaction design students as their peer.
Individual semi-structured interview with the lecturer of IAT 884, who had around 10 years of teaching experience, involving open-ended questions such as:
How satisfied are you usually with the input action ideas generated by your students?
Have you noticed your students facing challenges when generating input action ideas?
Individual semi-structured interviews (video recorded) with 3 IAT 884 students, involving open-ended questions such as:
Can you tell me how you came up with the TUI idea for your course project?
Why did you choose to incorporate this particular input action in your TUI idea?
Did you face any challenges when generating ideas for input actions?
Data Analysis
I transcribed the interviews and did affinity analysis of the qualitative data.
Insights
I designed a tangible tool, IdeaBits, as an exploratory research instrument to identify potential ways of supporting novice tangible interaction design students in generating input action ideas, as well as pitfalls to avoid. The video below introduces IdeaBits.
Introducing the first version of IdeaBits.
The graphical user interface of revised IdeaBits.
Design Goal
Help novice tangible interaction design students generate as many input action ideas as possible (preferably diverse ones) without worrying about their value.
Features
Design Rationales
I conducted an exploratory case study [20] to identify potential ways of supporting novice tangible interaction design students in generating input action ideas, as well as pitfalls to avoid. I investigated students' interactions and experiences with IdeaBits while they used it to solve a given design problem.
Research Questions
In what ways do users use IdeaBits 2.0 to generate TUI ideas?
In what ways do users think IdeaBits 2.0 supports them to generate TUI ideas?
What challenges and limitations do users face while using IdeaBits 2.0 to generate TUI ideas?
User Recruiting and Screening
To meet the tight timeline, I recruited participants within 5 days. I designed eye-catching flyers and posters, put the posters up on notice boards at the School of Interactive Arts and Technology (SIAT) and on social media platforms, and gave brief presentations of the study in the lectures and labs of 8 SIAT undergraduate courses, followed by distribution of the flyers. I also signed up interested students on the spot by collecting their contact information for follow-up.
Using a screening questionnaire, I screened for novice tangible interaction design students based on their educational and work experience in tangible interaction design or similar domains involving embodied interactions (e.g., gestural interfaces, virtual reality). I also required the ability to communicate in English and no self-identified physical or cognitive disabilities.
By the ninth participant the data had begun to repeat, with no significant new insights emerging. I ran a few more sessions to confirm this saturation and stopped the study after 12 participants.
Evaluation Procedure
The evaluation involved one-hour individual design sessions conducted in a lab at the School of Interactive Arts and Technology. I provided participants with IdeaBits, including both sets of artifacts, so they could use whichever they preferred. I also provided some stationery and modeling materials.
I began the evaluation sessions by introducing participants to IdeaBits and the design task. To avoid familiarizing participants with the examples in IdeaBits, I introduced the graphical user interface by showing tangible user interface examples of only one input action, while encouraging participants to interact with only one artifact from each set. I mentioned that they were expected to use the prototype but were not limited to the introduced input actions and examples.
I then conducted the video-recorded 1-hour design session, during which the participant was left alone in the room to work on the given design task. I observed these sessions remotely from another room via live-streamed video, taking notes on what participants were doing (building a prototype, sketching, etc.), what they were using and how (which artifacts, modeling materials, etc.), and anything unexpected that stood out in relevance to the research questions.
Remote observation using TeamViewer [21] and live-streaming of the video-recorded design sessions.
Following the design session, I conducted semi-structured interviews (video recorded) to investigate how the participants used IdeaBits for the design task and what their experience was. I asked open-ended questions such as:
Can you walk me through how you came up with this final idea?
What are the ways you used IdeaBits to do this design task?
In what ways, if any, do you think IdeaBits hindered or limited you in doing the design task?
During the interviews I also followed up on crucial findings from the remote observation, using open-ended questions to probe the reasons behind observed behaviors. I ended each evaluation session by giving the participant CAD $25 as remuneration.
Design Task
I generated several ideas for the design task and selected the final one based on several factors such as:
Data Collection
Pilot Study
I iteratively developed the evaluation procedure, design task, and data collection methods discussed above by running a pilot study with 5 participants, after which I evaluated IdeaBits with 12 participants. Some of the insights from the pilot study include:
Almost all the pilot participants assumed they were limited to the input actions introduced by IdeaBits, since they were asked to use the tool during the design session; only some asked during the introduction to verify this assumption. To maintain uniformity among participants and avoid implicit assumptions, I therefore mentioned during the introduction that they could use any input actions and were not limited to the five introduced possibilities.
Some of the pilot participants mentioned that they lost track of time. I therefore added voice reminders to the 1-hour timer [23] at 40, 20, and 5 minutes remaining to help participants manage their time.
I initially placed the two tables in the room parallel to each other. The first two pilot participants mentioned that they found it difficult to shift between the tables, so I rearranged the tables at 90 degrees to each other. This arrangement let participants move without leaving the chair with casters, making it quick to switch between the two tables.
I collaborated with two peer tangible interaction design researchers (Ofir and Victor) to analyze the data. We did affinity analysis, open coding, axial coding, selective coding, triangulation, video analysis, and statistical analysis.
Interview Analysis
Ofir and I iteratively and inductively analyzed the interview data using open, axial, and selective coding [24-26].
We individually analyzed two interviews at a time, then met to discuss our analyses and arrive at a set of mutually agreed-upon themes. Next, we used these themes to individually analyze two more interviews. We repeated this process until the set of themes required no further changes. We triangulated different data sources, used detailed descriptions, and presented disconfirming evidence, all of which contribute to validity.
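Our agreement was reached through discussion to consensus rather than a statistical measure, but for readers curious how inter-coder agreement can be quantified at such checkpoints, here is a minimal Cohen's kappa sketch. The code labels and excerpt data below are invented for illustration and are not the study's actual codes.

```python
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Cohen's kappa for two coders labeling the same interview excerpts."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of excerpts where the coders assigned the same code.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to the same six excerpts by two coders.
a = ["fixation", "hands-on", "fixation", "examples", "hands-on", "examples"]
b = ["fixation", "hands-on", "examples", "examples", "hands-on", "fixation"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.50 for this toy data
```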
Participants' Deliverables Analysis
In deliverable 3, the participants listed the emotions, input actions, and outputs in their final idea. I analyzed this data for the types of input actions and outputs in the participants' final ideas, comparing them against 1) input actions and outputs in IdeaBits 2.0 and 2) input actions not in IdeaBits 2.0 that are (or are not) commonly found in everyday devices and TUIs. In deliverable 4, the participants listed the electronic components required to prototype their final idea. Victor and I analyzed this data for completeness, feasibility, errors, and the presence of the listed electronic components in the sensor-enabled artifacts.
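As a rough illustration of this comparison, one could tally participants' listed input actions against the set introduced by IdeaBits 2.0. The action labels and counts below are hypothetical (rotate, slide, and squeeze appear earlier in this write-up; tilt, shake, and the participant data are invented for the sketch).

```python
from collections import Counter

# Hypothetical set of input actions introduced by IdeaBits 2.0, and a
# hypothetical pool of input actions listed across participants' final ideas.
IDEABITS_ACTIONS = {"rotate", "slide", "squeeze", "tilt", "shake"}
listed = ["rotate", "press", "squeeze", "rotate", "blow", "slide", "stretch"]

counts = Counter(listed)
in_tool = {a: c for a, c in counts.items() if a in IDEABITS_ACTIONS}
outside_tool = {a: c for a, c in counts.items() if a not in IDEABITS_ACTIONS}

print("from IdeaBits 2.0:", in_tool)         # actions the tool introduced
print("beyond IdeaBits 2.0:", outside_tool)  # actions participants added themselves
```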
Other Data Analysis
We also analyzed the video recordings of the design sessions and interviews, participants' deliverables, and remote observation notes to:
Clarify and crosscheck the interview transcript data as needed.
Look for evidence that supported or conflicted with the themes that emerged from the interview data, with a secondary focus on any new emerging patterns and findings.
Themes
Seven themes emerged from the data analysis.
Below I highlight some of the crucial findings from these themes.
Design Recommendations
Below I highlight some of the design recommendations I framed based on the findings.
Contributions
Limitations and Future Work
What I Learned
Achievements
I published a full paper from the project at ACM CHI 2021 and a short paper with a poster presentation at ACM CHI 2019.
The ACM Conference on Human Factors in Computing Systems (CHI) is the premier peer-reviewed conference for human-computer interaction (HCI). Google Scholar and SJR rank ACM CHI as the number 1 publication venue for HCI research, with an SJR h-index of 177. Reflecting this selectivity, its acceptance rate is around 25%.
I secured funding for the project from the Social Sciences and Humanities Research Council of Canada (SSHRC).
I successfully defended this project as my thesis to earn my Master's degree in human-computer interaction from Simon Fraser University, Canada.
References