AH 2015

The 6th Augmented Human International Conference was held in Singapore in March 2015.

2015 Program

Augmented Human 2015


08 h 00 – 09 h 00  Registration  Marina Bay Sands, Singapore
09 h 00 – 09 h 15  Welcome and Opening  Room: Heliconia Jr.
Moderators: Suranga Nanayakkara, Ellen Yi-Luen Do
09 h 15 – 10 h 30  Opening Keynote: The Way We May Interact  Room: Heliconia Jr.
Speaker: Pranav Mistry
From Abacus to Android, the medium of our interaction has been constantly changing. The wonderful history of this change, and its next stop, excites us and will continue to for some time to come. At this latest crossroad, we will again break the norms with experiments that may someday become commonplace. But some of the experiments that did not become norms can be just as exciting, and sometimes more so. The goal of these quests is to make us more connected, maybe to each other or to something inanimate. Or maybe it is just an aimless quest. The dream of 'what will be the Next Medium' keeps me awake, and I would love to share the story of it with you.
10 h 30 – 11 h 00  Coffee Break  Room: Hibiscus Jr.
11 h 00 – 12 h 30  Session 1: Wearable Interfaces  Room: Heliconia Jr.
Moderator: Jun Rekimoto
11 h 01 – 11 h 20  Vision Enhancement: Defocus Correction via Optical See-Through Head-Mounted Displays  Room: Heliconia Jr.
Speaker: Yuta Itoh
Authors: Yuta Itoh and Gudrun Klinker

Abstract: Vision is our primary, essential sense for perceiving the real world. Human beings have long been keen to extend the limits of the eye's function by inventing various vision devices such as corrective glasses, sunglasses, telescopes, and night vision goggles. Recently, Optical See-Through Head-Mounted Displays (OST-HMDs) have penetrated the commercial market. While the traditional devices have improved our vision by altering or replacing it, OST-HMDs can augment and mediate it. We believe that future OST-HMDs, combined with wearable sensing systems including image sensors, will dramatically improve our vision capabilities.

To take a step toward this future, this paper investigates Vision Enhancement (VE) techniques via OST-HMDs. We aim to correct optical defects of the human eye, especially defocus, by overlaying a compensation image on the user's actual view so that the filter cancels the aberration. Our contributions are threefold. First, we formulate our method by taking the optical relationship between the OST-HMD and the human eye into consideration. Second, we demonstrate the method in proof-of-concept experiments. Lastly, and most importantly, we provide a thorough analysis of the results, including limitations of the current system, the research issues that must be addressed to realize practical VE systems, and possible solutions to these issues for future research.
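In the simplest linear model, the compensation described above amounts to pre-filtering the overlay image with an approximate inverse of the eye's defocus blur (its point spread function), so that the blur cancels when the eye views the combined image. The sketch below only illustrates that idea; the Gaussian PSF, the Wiener-style regularization, and all function names are assumptions for illustration, not the authors' method:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Toy point spread function modelling defocus blur."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def psf_to_otf(psf, shape):
    """Embed the PSF in a full-size array, centred at the origin,
    and return its frequency response."""
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                     axis=(0, 1))
    return np.fft.fft2(padded)

def blur(image, psf):
    """Simulate the eye's defocus as circular convolution with the PSF."""
    H = psf_to_otf(psf, image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

def precorrect(image, psf, eps=1e-2):
    """Wiener-style inverse filter: pre-sharpen the overlay so that the
    eye's blur approximately cancels, recovering the target image."""
    H = psf_to_otf(psf, image.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + eps)  # regularised inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))
```

Under this toy model, `blur(precorrect(img, psf), psf)` is closer to the target image than `blur(img, psf)`, which is the effect the paper's compensation image aims for on the real eye.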
11 h 20 – 11 h 35  Exploring Users’ Attitudes towards Social Interaction Assistance on Google Glass  Room: Heliconia Jr.
Authors: Qianli Xu, Michal Arika Mukawa, Joo Hwee Lim, Cheston Yin Chet Tan, Shue Ching Chia, Tian Gan, Liyuan Li and Bappaditya Mandal 

Abstract: Wearable vision brings new opportunities for augmenting humans in social interactions. However, along with it come privacy concerns and possible information overload. We explore users’ needs and attitudes toward augmented interaction in face-to-face communication. In particular, we want to find out whether users need additional information when interacting with acquaintances, what information they want to access, and how they use it in their communication. We designed a prototype system on Google Glass that provides the wearer with in-situ personal information about the target person. The prototype was tested with 20 participants in a few interaction scenarios. Based on a thorough analysis of users’ behaviors and feedback, we find that users generally appreciated the usefulness of wearable assistance for social interactions. We highlight a few key technical, behavioral, and social implications of wearable vision for interaction assistance to foster technology advancement.
11 h 35 – 11 h 55  PickRing: Seamless Interaction through Pick-Up Detection  Room: Heliconia Jr.
Speaker: Katrin Wolf
Authors: Katrin Wolf and Jonas Willaredt 

Abstract: We frequently switch between devices, and currently we have to unlock most of them. Ideally, such devices should be seamlessly accessible and not require an unlock action. We introduce PickRing, a wearable sensor that allows seamless interaction with devices by predicting the intention to interact with them through pick-up detection. A cross-correlation between the ring's and the device's motion is used as the basis for identifying the intention of device usage. In an experiment, we found that pick-up detection using PickRing costs neither additional effort nor time compared with the pure pick-up action, while having more hedonic qualities and being rated as more attractive than a standard smartphone unlocking technique. Thus, PickRing can reduce the overhead of using devices by seamlessly activating mobile and ubiquitous computers.
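The core signal-processing step the abstract mentions, correlating the ring's motion with the device's motion, can be sketched as follows. The lag range, the threshold, and the function names are illustrative assumptions, not values from the paper:

```python
import numpy as np

def pickup_score(ring_accel, device_accel):
    """Normalised cross-correlation between the ring's and the device's
    acceleration signals; near 1.0 when the two move together."""
    r = ring_accel - np.mean(ring_accel)
    d = device_accel - np.mean(device_accel)
    denom = np.linalg.norm(r) * np.linalg.norm(d)
    if denom == 0:
        return 0.0
    # allow a small timing offset between the two sensors
    best = max(np.dot(np.roll(r, k), d) for k in range(-5, 6))
    return best / denom

def is_pickup(ring_accel, device_accel, threshold=0.8):
    """Declare a pick-up when both sensors observe correlated motion."""
    return pickup_score(ring_accel, device_accel) >= threshold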
11 h 55 – 12 h 10  SkinWatch: Skin Gesture Interaction for Smart Watch  Room: Heliconia Jr.
Authors: Masa Ogata and Michita Imai 

Abstract: We propose SkinWatch, a new interaction modality for wearable devices. SkinWatch provides gesture input by sensing deformation of the skin under a wrist-worn wearable device, also known as a smart watch. Gestures are recognized by matching against learned training data and by two-dimensional linear input. The sensing part is small, thin, and stable, to accept accurate input via the user’s skin. We also implement an anti-error mechanism to prevent unexpected input when the user moves or rotates his or her forearm. The whole sensor costs less than $1.50, and the sensor layer does not exceed 3 mm in height in this prototype. We demonstrate sample applications with a practical task, using two-finger skin gesture input.
12 h 30 – 13 h 30  Lunch  Room: Hibiscus Jr.
13 h 30 – 15 h 00  Session 2: Altered Experiences  Room: Heliconia Jr.
Moderator: Kai Kunze
13 h 31 – 13 h 50  Improving Work Productivity by Controlling the Time Rate Displayed by the Virtual Clock  Room: Heliconia Jr.
Authors: Yuki Ban, Sho Sakurai, Takuji Narumi, Tomohiro Tanikawa and Michitaka Hirose 

Abstract: The main contribution of this paper is a method for unconsciously improving work productivity by controlling the time rate displayed by a virtual clock. Recent work has made clear that work efficiency is influenced by various environmental factors. One way to increase work productivity is to improve the work rate over a given duration. It is also becoming clear that time pressure has the potential to enhance task performance and work productivity; both the estimation of work rate per unit time and this time pressure arise from the sensation of time. In this study, we focus on the clock, a tool that conveys the rate and length of time to everyone by displaying the sensation of time as if it were physically existing information. We propose a method to improve a person’s work productivity unconsciously by inducing a false sense of elapsed time through a virtual clock that visually displays a time rate different from the real one. We conducted experiments to investigate how changes in the displayed virtual time rate influence time perception and work efficiency. The experimental results showed that, by displaying an accelerated time rate, it is possible to improve work efficiency while time perception remains constant, regardless of whether the relative speed of the displayed time rate is fast or slow.
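The manipulated clock itself is simple to state: displayed time advances at a chosen rate r relative to real time. A minimal sketch of such a clock (the class and parameter names are illustrative, not from the paper):

```python
import time

class VirtualClock:
    """A clock whose displayed time advances at `rate` x real time;
    rate > 1 shows time passing faster than it really does."""

    def __init__(self, rate, now=None):
        self.rate = rate
        # real and virtual time coincide at start-up
        self.start = time.time() if now is None else now

    def displayed_time(self, now=None):
        """Virtual timestamp to render on the clock face."""
        real = time.time() if now is None else now
        return self.start + self.rate * (real - self.start)
```

For example, a clock created with `rate=1.2` shows 72 virtual minutes elapsed after 60 real minutes, which is the kind of accelerated display the experiments manipulate.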
13 h 50 – 14 h 10  Gravitamine Spice: A System that Changes the Perception of Eating through Virtual Weight Sensation  Room: Heliconia Jr.
Authors: Masaharu Hirose, Karin Iwazaki, Kozue Nojiri, Minato Takeda, Yuta Sugiura and Masahiko Inami 

Abstract: The flavor of food is not limited to the sense of taste; it changes with information perceived through other senses such as hearing, vision, and touch, and with individual experience and cultural background. The proposed “Gravitamine Spice” system focuses on this cross-modality of perception, in particular the weight of food we perceive when carrying utensils. The system consists of a fork and a seasoning called “OMOMI”. Users can change the weight of the food by adding the seasoning to it. Through this sequence of actions, users can enjoy different dining experiences, which may change the taste of their food or their feeling toward it while chewing.
14 h 10 – 14 h 25  B-C-Invisibility Power: Introducing Optical Camouflage Based on Mental Activity in Augmented Reality  Room: Heliconia Jr.
Speaker: Jonathan Mercier-Ganady
Authors: Jonathan Mercier-Ganady, Maud Marchal and Anatole Lécuyer 

Abstract: In this paper we introduce a novel and interactive approach to optical camouflage called "B-C-Invisibility power". We propose to combine augmented reality and Brain-Computer Interface (BCI) technologies to design a system which, in a sense, provides the "power of becoming invisible". Our optical camouflage is achieved on a PC monitor combined with an optical tracking system. A cut-out image of the user is computed from a live video stream and superimposed on the pre-recorded background image using a transparency effect. The transparency level is controlled by the output of a BCI, enabling the user to control her invisibility directly with mental activity. The mental task required to increase/decrease the invisibility is related to a concentration/relaxation state. Results from a preliminary study based on a simple video game inspired by the Harry Potter universe notably showed that, compared to standard control with a keyboard, controlling the optical camouflage directly with the BCI could enhance the user experience and the feeling of "having a super-power".
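The control loop the abstract describes reduces to two small pieces: blending the user cut-out over the pre-recorded background at some transparency level, and smoothly driving that level from the BCI's concentration score. A hedged sketch (the smoothing gain and all names are assumptions, not the authors' implementation):

```python
def composite(user_pixel, background_pixel, transparency):
    """Blend the cut-out user image over the pre-recorded background;
    transparency = 1.0 makes the user fully invisible."""
    t = min(max(transparency, 0.0), 1.0)
    return tuple((1 - t) * u + t * b
                 for u, b in zip(user_pixel, background_pixel))

def update_transparency(prev, concentration, gain=0.1):
    """Move the invisibility level toward the BCI concentration score,
    smoothing frame-to-frame classifier jitter (gain is illustrative)."""
    target = min(max(concentration, 0.0), 1.0)  # clamp noisy BCI output
    return prev + gain * (target - prev)
```

Called once per video frame, `update_transparency` makes the camouflage fade in gradually as the user concentrates, rather than flickering with the raw BCI signal.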
14 h 25 – 14 h 40  Snow Walking: Motion-Limiting Device that Reproduces the Experience of Walking in Deep Snow  Room: Heliconia Jr.
Speaker: Tomohiro Yokota
Authors: Tomohiro Yokota, Motohiro Ohtake, Yukihiro Nishimura, Toshiya Yui, Rico Uchikura and Tomoko Hashida 

Abstract: We propose “Snow Walking,” a boot-shaped device that reproduces the experience of walking in deep snow. The main purpose of this study is to reproduce the feel of walking in a special environment that we do not experience daily, particularly one that has depth, such as deep snow. When you walk in deep snow, you get three feelings: pulling your foot up out of the deep snow, putting your foot down into the deep snow, and your feet crunching across the bottom of the deep snow. You cannot walk in deep snow easily, and with the system you get a special feeling, not only on the sole of your foot but as if your entire foot were buried in the snow. We reproduce these feelings by using a slider, electromagnet, vibration speaker, hook and loop fastener, and potato starch.
14 h 40 – 14 h 55  The Kraftwork and The Knittstruments: Augmenting Knitting With Sound  Room: Heliconia Jr.
Authors: Enrique Encinas, Konstantia Koulidou and Robb Mitchell 

Abstract: This paper presents a novel example of technological augmentation of a craft practice. By translating the skilled, embodied knowledge of knitting practice into the language of sound, our study explores how audio augmentation of routinized motion patterns affects an individual’s awareness of her bodily movements and alters conventional practice. Four different instruments (the Knittstruments: The ThereKnitt, The KnittHat, The Knittomic, and The KraftWork) were designed and tested in four different locations. This research entails cycles of data collection and analysis based on the grounded theory methods of noting, coding and memoing. Analysis of the collected data suggests substantial alterations in the knitters’ performance due to audio feedback, at both the individual and group level, as well as improvisation in the process of making. We argue that the use of Knittstruments can have relevant consequences for interface design, wearable computing, and artistic and musical creation in general, and we hope to provide an inspiring new venue for designers, artists and knitters to explore.
15 h 00 – 15 h 30  Coffee Break  Room: Hibiscus Jr.
15 h 30 – 17 h 00  Session 3: Haptics and Exoskeletons  Room: Heliconia Jr.
Moderator: Hideki Koike
15 h 31 – 15 h 50  Augmenting Spatial Skills with Semi-Immersive Interactive Desktop Displays: Do Immersion Cues Matter?  Room: Heliconia Jr.
Speaker: Orit Shaer
Authors: Erin Solovey, Johanna Okerlund, Cassandra Hoef, Jasmine Davis and Orit Shaer

Abstract: 3D stereoscopic displays for desktop use show promise for augmenting users’ spatial problem solving tasks. These displays have the capacity for different types of immersion cues including binocular parallax, motion parallax, proprioception, and haptics. Such cues can be powerful tools in increasing the realism of the virtual environment by making interactions in the virtual world more similar to interactions in the real non-digital world [21, 32]. However, little work has been done to understand the effects of such immersive cues on users’ understanding of the virtual environment. We present a study in which users solve spatial puzzles with a 3D stereoscopic display under different immersive conditions while we measure their brain workload using fNIRS and ask them subjective workload questions. We conclude that 1) stereoscopic display leads to lower task completion time, lower physical effort, and lower frustration; 2) vibrotactile feedback results in increased perceived immersion and in higher cognitive workload; 3) increased immersion (which combines stereo vision with vibrotactile feedback) does not result in reduced cognitive workload.
15 h 50 – 16 h 10  RippleTouch: Initial Exploration of a Wave Resonant Based Full Body Haptic Interface  Room: Heliconia Jr.
Authors: Anusha Withana, Shunsuke Koyama, Daniel Saakes, Kouta Minamizawa, Masahiko Inami and Suranga Nanayakkara 

Abstract: We propose RippleTouch, a low resolution haptic interface that is capable of providing haptic stimulation to multiple areas of the body via a single point of contact. The concept is based on the low-frequency acoustic wave propagation properties of the human body. By stimulating the body with different amplitude-modulated frequencies at a single contact point, we were able to dissipate the wave energy in a particular region of the body, creating haptic stimulation without direct contact. The RippleTouch system was implemented on a regular chair, where four bass-range speakers were mounted underneath the seat and driven by a simple stereo audio interface. The system was evaluated to investigate the effect of the frequency characteristics of the amplitude modulation system. Results demonstrate that we can effectively create haptic sensations at different parts of the body with a single contact point (i.e. the chair surface). We believe the RippleTouch concept could serve as a scalable solution for providing full-body haptic feedback in a variety of situations, including entertainment, communication, public spaces and vehicular applications.
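The stimulation signal described, a low-frequency carrier whose amplitude is modulated at a chosen frequency, can be generated as below. The specific carrier and modulation frequencies are illustrative placeholders, not values from the paper:

```python
import numpy as np

def am_haptic_signal(duration, fs=44100, carrier_hz=200.0,
                     mod_hz=40.0, depth=0.8):
    """Amplitude-modulated low-frequency signal of the kind RippleTouch
    feeds to the speakers under the seat (all constants illustrative):
    a carrier in the body's acoustic propagation range, with the
    modulation frequency selecting where the wave energy dissipates."""
    t = np.arange(int(duration * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    # normalise so the output stays within [-1, 1] for audio playback
    return envelope * np.sin(2 * np.pi * carrier_hz * t) / (1.0 + depth)
```

Sweeping `mod_hz` while keeping the contact point fixed is, in essence, the experimental variable the abstract's evaluation investigates.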
16 h 10 – 16 h 30  Optimal Design for Individualised Passive Assistance  Room: Heliconia Jr.
Speaker: Robert Matthew
Authors: Robert Matthew, Victor Shia, Masayoshi Tomizuka and Ruzena Bajcsy 

Abstract: Assistive devices are capable of restoring independence and function to people suffering from musculoskeletal impairments. Traditional assistive exoskeletons can be divided into active or passive devices, depending on the method used to provide joint torques. The design of these devices often does not take into account the abilities of the individual, leading to complex designs, joint misalignment and muscular atrophy due to over-assistance at each joint.
We present a novel framework for the design of passive assistive devices whereby the device provides the minimal amount of assistance required to maximise the space the user can reach. In doing so, we effectively remap the user's capable torque load over their workspace, exercising existing muscle while ensuring that key points in the workspace are reached. In this way we hope to reduce the risk of muscular atrophy while assisting with tasks.
We implement two methods for finding the necessary passive device parameters: one considers static loading conditions, while the second simulates the system dynamics using level set methods. This allows us to determine the set of points at which an individual can hold their arm stationary, the statically achievable workspace (SAW). We show the efficacy of these methods on a number of case studies, which show that individuals with pronounced or asymmetric muscle weakness can have their SAW restored, recovering a range of motion.
16 h 30 – 16 h 45  Design of a Novel Finger Exoskeleton with a Sliding Six-Bar Joint Mechanism  Room: Heliconia Jr.
Speaker: Mahasak Surakijboworn
Authors: Mahasak Surakijboworn and Wittaya Wannasuphoprasit 

Abstract: The objective of this paper is to propose a novel design for a finger exoskeleton. The design consists of 3 identical joint mechanisms, each of which adopts a six-bar RCM linkage as an equivalent revolute joint, incorporating 2 prismatic joints to form a closed-chain structure with the finger joint. Cable and hose transmission is designed to reduce the burden from the prospective driving modules. As a result, the prototype coherently follows finger movement throughout the full range of motion for every size of finger.
17 h 00 – 17 h 30  Break  Marina Bay Sands, Singapore
17 h 30 – 20 h 30  Spotlight on: Demonstrations & Welcome Reception  Room: Hibiscus Jr.
Moderators: Weiquan Lu, Anusha Withana


08 h 00 – 09 h 00  Registration  Marina Bay Sands, Singapore
09 h 00 – 09 h 15  Announcements  Room: Heliconia Jr.
Moderators: Ellen Yi-Luen Do, Suranga Nanayakkara
09 h 15 – 10 h 30  Keynote: Cybathlon 2016, An International Championship for Augmented Parathletes  Room: Heliconia Jr.
Speaker: Robert Riener
The Cybathlon is an international championship for racing pilots with disabilities (i.e., parathletes) who use advanced robotic technologies for assistance with daily life activities. The competitions comprise different disciplines applying the most modern powered devices, such as powered prostheses, wearable exoskeletons, powered wheelchairs, functional electrical stimulation, as well as novel brain-computer interfaces. The main goal of the Cybathlon is to provide a platform for the development of novel assistive technologies that are useful in the daily lives of persons with motor disabilities. Furthermore, through the organization of the Cybathlon we will help remove barriers between the public, people with disabilities and science. The first Cybathlon will take place in a large indoor stadium on 8 October 2016 and will be broadcast live all over the world. Thereafter, the Cybathlon will be held periodically, at least every 4 years.
10 h 30 – 11 h 00  Coffee Break  Room: Hibiscus Jr.
11 h 00 – 12 h 30  Spotlight on: Posters  Room: Hibiscus Jr.
Moderators: Weiquan Lu, Anusha Withana
12 h 30 – 13 h 30  Lunch  Room: Hibiscus Jr.
13 h 30 – 15 h 00  Spotlight on: Student Design Competition  Room: Hibiscus Jr.
Moderators: Yuichiro Katsumoto, Halley Profita
15 h 00 – 15 h 30  Coffee Break  Room: Hibiscus Jr.
15 h 30 – 16 h 45  Session 4: Augmenting Realities  Room: Heliconia Jr.
Moderator: Woontack Woo
15 h 31 – 15 h 50  A Life Log System that Recognizes the Objects in a Pocket  Room: Heliconia Jr.
Authors: Kota Shimozuru, Tsutomu Terada and Masahiko Tsukamoto 

Abstract: A novel approach has been developed for recognizing objects in pockets and for recording the events related to those objects. Information on putting an object into or taking it out of a pocket is closely related to user context. For example, when a house key is taken out of a pocket, the owner of the key is likely just getting home. We implemented an objects-in-pocket recognition device, which has a pair of infrared sensors arranged in a matrix, and life log software to time-stamp the events that occur. We evaluated whether or not the system could recognize each of five objects (a smartphone, ticket, hand, key, and lip balm) using template matching. When one registered object (the smartphone, ticket, or key) was put in the pocket, our system recognized the object correctly 91% of the time on average. We also evaluated our system in one action scenario. With our system's time stamps, the user could easily remember what he took on that day and when he used the items.
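Template matching over a small binary sensor matrix can be as simple as a nearest-neighbour search under Hamming distance. The sketch below illustrates the idea with made-up 4-cell frames; the real device reads a full matrix of infrared sensor values, and these labels and templates are illustrative only:

```python
def hamming(a, b):
    """Number of differing cells between two binary sensor frames."""
    return sum(x != y for x, y in zip(a, b))

def recognize(frame, templates):
    """Match a binary frame from the infrared sensor matrix against
    registered object templates; return the closest object's label."""
    return min(templates, key=lambda label: hamming(frame, templates[label]))
```

A frame captured when an object enters the pocket is compared against each registered object's stored frame, and the closest template wins, which is the gist of the paper's per-object recognition step.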
15 h 50 – 16 h 05  VISTouch: Dynamic Three-Dimensional Connection between Multiple Mobile Devices  Room: Heliconia Jr.
Speaker: Masasuke Yasumoto
Authors: Masasuke Yasumoto and Takehiro Teraoka 

Abstract: Recently, it has become increasingly common for people to own multiple mobile devices, but it is still difficult to use them effectively in combination. In this study, we have constructed a new system, “VISTouch”, that achieves a new operational capability and increases user interest in mobile devices by enabling multiple devices to be used in combination dynamically and spatially. Using VISTouch, for example, when a smartphone is spatially connected to a horizontally positioned tablet displaying a map viewed from above, the devices dynamically obtain their correct relative position; the smartphone displays images viewed from its position, direction, and angle in real time, acting as a window into the virtual 3D space. Finally, we applied VISTouch to two applications that use detailed information about the relative position of multiple devices in real space. These applications demonstrate the improved usability of using multiple devices in combination.
16 h 05 – 16 h 20  LumoSpheres: Real-Time Tracking of Flying Objects and Image Projection for a Volumetric Display  Room: Heliconia Jr.
Authors: Hiroaki Yamaguchi and Hideki Koike 

Abstract: This paper proposes a method for real-time tracking of flying objects and image projection onto them, for developing a particle-based volumetric 3D display. First, the concept of the particle-based volumetric 3D display, which uses high-speed cameras and projectors, is described. After pointing out the latency issue in such projector-camera systems, we present our solution: a prediction model combining kinematic laws with a Kalman filter. We conducted experiments that show the accuracy of the projection. We also present an application of our method in entertainment, Digital Juggling.
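The latency compensation the abstract describes combines a kinematic (ballistic) motion model with a Kalman filter: the filter tracks the ball from camera measurements, and the kinematic model predicts it forward by the projector's latency. A 1-D vertical sketch with illustrative noise constants (not the paper's implementation) conveys the idea:

```python
import numpy as np

GRAVITY = -9.81  # m/s^2

def predict(state, dt):
    """Kinematic prediction: advance [position, velocity] by dt under
    gravity, e.g. to look ahead by the projector-camera latency."""
    pos, vel = state
    return np.array([pos + vel * dt + 0.5 * GRAVITY * dt ** 2,
                     vel + GRAVITY * dt])

def kalman_step(state, cov, measured_pos, dt, q=1e-3, r=1e-2):
    """One predict/update cycle of a 1-D Kalman filter on vertical
    position; q and r are illustrative process/measurement noise."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity model
    u = np.array([0.5 * GRAVITY * dt ** 2, GRAVITY * dt])  # gravity input
    state = F @ state + u
    cov = F @ cov @ F.T + q * np.eye(2)
    H = np.array([[1.0, 0.0]])                      # we only measure position
    K = cov @ H.T / (H @ cov @ H.T + r)             # Kalman gain
    state = state + (K * (measured_pos - state[0])).ravel()
    cov = (np.eye(2) - K @ H) @ cov
    return state, cov
```

Each camera frame runs `kalman_step`; the projector then renders at `predict(state, latency)` so the image lands on the ball rather than behind it.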
16 h 20 – 16 h 35  DogPulse: Augmenting the Coordination of Dog Walking through an Ambient Awareness System at Home  Room: Heliconia Jr.
Speaker: Daniel Vestergaard
Authors: Christoffer Skovgaard, Josephine Raun Thomsen, Nervo Verdezoto and Daniel Vestergaard 

Abstract: This paper presents DogPulse, an ambient awareness system to support the coordination of dog walking among family members at home. DogPulse augments a dog collar and leash set to activate an ambient shape-changing lamp and visualize the last time the dog was taken for a walk. The lamp gradually changes its form and pulsates its lights in order to keep the family members aware of the dog walking activity. We report the iterative prototyping of DogPulse, its implementation and its preliminary evaluation. Based on our initial findings, we present the limitations and lessons learned as well as highlight recommendations for future work.
17 h 00 – 21 h 30  Social Event @ Mount Faber  Mount Faber, Singapore
  • 5pm – Shuttle service leaves from Marina Bay Sands
  • 6pm – Cable Car Ride
  • 7pm – Conference Dinner
  • 9:30pm – Shuttle service back to Marina Bay Sands



08 h 00 – 09 h 00  Registration  Marina Bay Sands, Singapore
09 h 00 – 10 h 30  Session 5: Learning and Reading  Room: Heliconia Jr.
Moderator: Weiquan Lu
09 h 01 – 09 h 20  Word Out! Learning the Alphabet and Spelling through Full Body Interactions  Room: Heliconia Jr.
Speaker: Clement Zheng
Authors: Kelly Yap, Clement Zheng, Angela Tay, Ching-Chiuan Yen and Ellen Yi-Luen Do 

Abstract: This paper presents Word Out, an interactive game for learning the alphabet and spelling through full-body interaction. Targeted at children 4-7 years old, Word Out employs the Microsoft Kinect to detect the silhouettes of players. Players are tasked with twisting and forming their bodies to match the shapes of the letters displayed on the screen. By adopting full-body interactions in games, we aim to promote learning through play, as well as encourage collaboration and kinesthetic learning for children. Over two months, more than 15,000 children played Word Out installed in two different museums. This paper presents the design and implementation of the Word Out game, shares insights from preliminary analyses of a survey carried out at the museums, and discusses future work.
09 h 20 – 09 h 35  Unconscious Learning of Speech Sounds using Mismatch Negativity Neurofeedback  Room: Heliconia Jr.
Speaker: Ming Chang
Authors: Ming Chang, Hiroyuki Iizuka, Yasushi Naruse, Hideyuki Ando and Taro Maeda 

Abstract: Learning the speech sounds of a foreign language is difficult for adults, and often requires significant training and attention. For example, native Japanese speakers are usually unable to differentiate between the “l” and “r” sounds in English; thus, words like “light” and “right” are hard to discriminate. We previously showed that the ability to discriminate similar pure tones can be improved unconsciously using neurofeedback (NF) training with mismatch negativity (MMN), but it was not clear whether this can improve discrimination of the speech sounds of words. We examined whether MMN neurofeedback is effective in helping native Japanese speakers discriminate “light” and “right” in English. Participants improved significantly in speech sound discrimination through NF training, unconsciously, without attention to the auditory stimuli or awareness of what was to be learned. Individual word sound recognition also improved significantly. Furthermore, our results indicate a lasting effect of NF training.
09 h 35 – 09 h 50  Use of an Intermediate Face between a Learner and a Teacher in Second Language Learning with Shadowing  Room: Heliconia Jr.
Authors: Yoko Nakanishi and Yasuto Nakanishi 

Abstract: Shadowing is a language-learning method whereby a learner attempts to repeat, i.e., shadow, what he/she hears immediately. We propose displaying a computer-generated intermediate face between a learner and a teacher as an appropriate intermediate scaffold for shadowing. The intermediate face allows the learner to follow a teacher’s face and mouth movements more effectively. We describe a prototype system that generates an intermediate face from real-time camera input and captured video. We also discuss a user study of the prototype system with crowd-sourced participants. The results of the user study suggest that the prototype system provided better pronunciation cues than video-only shadowing techniques.
09 h 50 – 10 h 10  Assessment of Stimuli for Supporting Speed Reading on Electronic Devices  Room: Heliconia Jr.
Authors: Tilman Dingler, Alireza Sahami Shirazi, Kai Kunze and Albrecht Schmidt 

Abstract: Technology has introduced multimedia to tailor information more broadly to our various senses, but by no means has the ability to consume information through reading lost its importance. To cope with the ever-growing amount of textual information to consume, different techniques have been proposed to increase reading efficiency: rapid serial visual presentation (RSVP) has been suggested to increase reading speed by effectively reducing the number of eye movements. Further, moving a pen, finger or the entire hand across text is a common technique among speed readers to help guide eye movements. We adopted these techniques for electronic devices by introducing stimuli on text that guide users' eye movements. In a series of two user studies we sped up users' reading speed to 150% of their normal rate and evaluated effects on text comprehension, mental load, eye movements and subjective perception. Results show that reading speed can be effectively increased by using such stimuli while keeping comprehension rates nearly stable. We observed initial strain on mental load which significantly decreased after a short while. Subjective feedback conveys that kinetic stimuli are better suited for long, complex text on larger displays, whereas RSVP was preferred for short text on small displays.
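The RSVP technique the abstract builds on shows one word at a time at a fixed screen position, paced by a target reading rate, so the eyes barely need to move. A minimal scheduler sketch (the long-word bonus is an illustrative heuristic, not from the paper):

```python
def rsvp_schedule(text, wpm=450):
    """Pace words for rapid serial visual presentation: each word is
    shown one at a time at `wpm` words per minute. Returns a list of
    (word, display_seconds) pairs; long words get extra time (an
    illustrative heuristic)."""
    base = 60.0 / wpm  # seconds per word at the target rate
    schedule = []
    for word in text.split():
        bonus = 0.5 * base if len(word) > 8 else 0.0
        schedule.append((word, base + bonus))
    return schedule
```

Speeding a reader up to 150% of their normal rate, as in the study, then amounts to setting `wpm` to 1.5 times their measured baseline.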
10 h 10 – 10 h 25  How Much Do You Read? – Counting the Number of Words a User Reads Using Electrooculography  Room: Heliconia Jr.
Speaker: Katsutoshi Masai
Authors: Kai Kunze, Katsutoshi Masai, Yuji Uema and Masahiko Inami 

Abstract: We read to acquire knowledge. Reading is a common activity performed in transit and while sitting, for example during the commute to work or at home on the couch. Although reading is associated with high vocabulary skills and even with increased critical thinking, we still know very little about effective reading habits. In this paper, we argue that as a first step to understanding reading habits in real life we need to quantify them with affordable and unobtrusive technology. Towards this goal, we present a system to track how many words a user reads using electrooculography sensors. Compared to previous work, we use active electrodes with a novel on-body placement optimized both for integration into glasses (or other head-worn eyewear) and for reading detection. Using this system, we present an algorithm capable of estimating the number of words a user reads, and evaluate it in a user-independent approach in experiments with 6 users over 4 different devices (8” and 9” tablet, paper, laptop screen). We achieve an error rate as low as 7% (based on eye motions alone) for the word count estimation (std = 0.5%).
10 h 30 – 11 h 00  Coffee Break  Room: Hibiscus Jr.
11 h 00 – 12 h 30  Session 6: Augmenting Sports… and Toilets!  Room: Heliconia Jr.
Moderator: Masahiko Inami
11 h 01 – 11 h 20  Designable Sports Field: Sport Design by a Human in Accordance with the Physical Status of the Player  Room: Heliconia Jr.
Authors: Ayaka Sato and Jun Rekimoto 

Abstract: We present the Designable Sports Field (DSF), an environment where a “designer” designs a sports field in accordance with the physical intensity of the player. Sports motivate players to compete and interact with teammates. However, the rules are fixed; thus, people who lack experience or physical strength often do not enjoy playing. In addition, the levels of the players should preferably match. On the other hand, in coaching, a coach trains players according to their skills. However, to be a coach requires considerable experience and expertise. We present a DSF application system called SportComposer. In this system, the “designer” and “player,” roles that can be assumed even by amateur players, participate in the sport to achieve different goals. The designer designs a sports field according to the physical status of the player, such as his/her heart rate, in real time. Thus, the player can play a physical game that matches his/her physical intensity. In experiments conducted under this environment, we tested the system with persons ranging from a small child to adults who are not expert in sports and confirmed that both the roles of the designer and the player are functional and enjoyable. We also report findings from a demonstration conducted with 92 participants in a public museum.
 11 h 20 –  11 h 35SAugmented Dodgeball: An Approach to Designing Augmented SportsRoom: Heliconia Jr.
Authors: Takuya Nojima, Ngoc Phuong, Takahiro Kai, Toshiki Sato and Hideki Koike 

Abstract: Ubiquitous computing offers enhanced interactive, human-centric experiences, including sporting and fitness-based applications. To enhance this experience further, we consider augmenting dodgeball by adding digital elements to a traditional ball game. To achieve this, an understanding of the game mechanics involving the participating movable bodies is required. This paper discusses the design process of a ball–player-centric interface that uses live data acquisition during gameplay for augmented dodgeball, which is presented as an application of augmented sports. Initial prototype testing shows that player detection can be achieved using a low-energy wireless sensor network, such as those used in fitness sensors, together with a ball with an embedded sensor and proximity tagging.
 11 h 35 –  11 h 50SA Mobile Augmented Reality System to Enhance Live Sporting EventsRoom: Heliconia Jr.
Authors: Samantha Bielli and Christopher G. Harris 

Abstract: Sporting events broadcast on television or through the internet are often supplemented with statistics and background information on each player. This information is typically only available for sporting events followed by a large number of spectators. Here we describe an Android-based augmented reality (AR) tool built on the Tesseract API that can store and provide augmented information about each participant in nearly any sporting event. This AR tool provides a more engaging spectator experience when viewing professional and amateur events alike. We also describe the preliminary field tests we have conducted, some identified limitations of our approach, and how we plan to address each in future work.
 11 h 50 –  12 h 10FA Teleoperated Bottom WiperRoom: Heliconia Jr.
Authors: Takeo Hamada, Hironori Mitake, Shoichi Hasegawa and Makoto Sato 

Abstract: In order to aid elderly and/or disabled people in cleaning and drying their posterior after defecation, a teleoperated bottom wiper is proposed. The wiper enables a person sitting on the toilet seat to wipe his/her bottom by specifying the wiping position and strength with a computer mouse and keyboard. The proposed teleoperation is novel in that the operator and the target are the same person. The operator feels force feedback through the buttocks instead of the hands. The result of a user study confirmed that users could successfully wipe their buttocks with appropriate position and strength by teleoperation. Since it is controlled by the user, the teleoperated wiper is suitable for accommodating each user's preference of the moment.
 12 h 10 –  12 h 25SThe Toilet Companion: A toilet brush that should be there for you and not for othersRoom: Heliconia Jr.
Authors: Laurens Boer, Nico Hansen, Ragna Lisa Möller, Ana Neto, Anne Holm Nielsen and Robb Mitchell 

Abstract: In this article we present the Toilet Companion: an augmented toilet brush that aims to provide moments of joy in the toilet room and, if necessary, stimulates toilet goers to use the brush. Based upon the amount of time a user sits upon the toilet seat, the brush swings its handle with increasing speed: initially to draw attention to its presence, but over time to give a playful impression. Thereafter, the entire brush makes rapid up-and-down movements to persuade the user to pick it up. In use, it generates beeps in response to human handling, to provide a sense of reward and accompanying pleasure. Despite our aims of providing joy and stimulation, participants in field trials with the Toilet Companion reported experiencing the brush as undesirable, predominantly because the sounds produced by the brush made private toilet room activities publicly perceivable. The design intervention thus challenged the social boundaries of the otherwise private context of the toilet room, opening up an interesting area for design-ethnographic research about the perception of space, where interactive artifacts can be mobilized to deliberately breach public, social, personal, and intimate spaces.
 12 h 30 –  13 h 30LunchRoom: Hibiscus Jr.
 13 h 30 –  15 h 00PPanel Session: Augmentation and Singularity: The Future of Augmented Human Room: Heliconia Jr.
Moderator: Jun Rekimoto 
Speakers: Ellen Yi-Luen Do, Masahiko Inami, Suranga Nanayakkara, Takuya Nojima, Hiroyuki Shinoda
 15 h 00 –  15 h 20CCoffee BreakRoom: Hibiscus Jr.
 15 h 20 –  16 h 35KClosing Keynote: New Frontiers in Sensory Substitution and Sensory Augmentation: Technology and Brain Mechanisms Room: Heliconia Jr.
Speaker: Amir Amedi
Abstract: In the first part of the talk I will present new ways to teach the brain to see again in blindness. I'll chart several key steps in this direction, such as developing novel sensory substitution devices (SSDs) and developing novel training protocols, including novel virtual training environments, online self-training tools, and serious games. Finally, I will present the concept of the Multisensory Bionic Eye (MBE), a device that combines invasive recovery of visual input with dedicated built-in auditory and tactile components that are based upon the progress made using SSDs.

In the second part of the talk I will present how the brain learns to process the information arriving from SSDs, by tracking the plastic changes in humans using functional magnetic resonance imaging. Our findings show that brain area specializations can emerge independently of sensory modality, and suggest that this might be mediated by cultural recycling of cortical circuits molded by distinct specializations and connectivity patterns.
 16 h 35 –  17 h 00Closing & Award Announcement Room: Heliconia Jr.
Speakers: Ellen Yi-Luen Do, Suranga Nanayakkara


General Co-Chairs

Suranga Nanayakkara
Singapore University of Technology and Design (Singapore)

Ellen Yi-Luen Do
Georgia Institute of Technology (USA) and National University of Singapore (Singapore)

Program Co-Chairs

Jun Rekimoto
The University of Tokyo (Japan)

Jochen Huber
MIT Media Lab (USA) and Singapore University of Technology and Design (Singapore)

Bing-Yu Chen
National Taiwan University (Taiwan)

Poster and Demonstration

Weiquan Lu
National University of Singapore (Singapore)

Anusha Withana
Singapore University of Technology and Design (Singapore)

Student Design
Competition Co-Chairs

Yuichiro Katsumoto
National University of Singapore (Singapore)

Halley Profita
University of Colorado - Boulder (USA)


Roshan Peiris
Singapore University of Technology and Design (Singapore)

Local Arrangements
and Logistics Chair

Kentaro Yasu
National University of Singapore (Singapore)

and Finance Chair

Richard Davis
Singapore Management University (Singapore)

Web and Student
Volunteer Chair

Benjamin Petry
Singapore University of Technology and Design (Singapore)

Publicity and Social
Networking Chair

Jean-Marc Seigneur
Medi@LAB/ISS/CUI/GSEM/SdS, University of Geneva (Switzerland)

Program Committee

Florian Alt, LMU München, Germany
Andreas Bulling, Max Planck Institute for Informatics, Germany
Liwei Chan, National Taiwan University, Taiwan
Hongbo Fu, City University of Hong Kong, China
Nan-Wei Gong, MIT Media Lab, USA
Jonna Hakkila, University of Lapland, Finland
Masahiko Inami, Keio University, Japan
Alexandra Ion, Hasso Plattner Institute, Germany
Gudrun Klinker, Technische Universität München, Germany
Hideki Koike, Tokyo Institute of Technology, Japan
Antonio Krüger, DFKI and Saarland University, Germany
Kai Kunze, Osaka Prefecture University, Japan
Paul Lukowicz, DFKI and University of Kaiserslautern, Germany
Kris Luyten, Hasselt University, Belgium
Pattie Maes, MIT Media Lab, USA
Shachar Maidenbaum, The Hebrew University of Jerusalem, Israel
Max Mühlhäuser, Technische Universität Darmstadt, Germany
Takuya Nojima, The University of Electro-communications, Japan
Ian Oakley, Ulsan National Institute of Science and Technology, South Korea
Patrick Olivier, Newcastle University, England
Alex Olwal, MIT, USA
Michael Rohs, University of Hannover, Germany
Enrico Rukzio, Ulm University, Germany
Alireza Sahami, University of Stuttgart, Germany
Hideo Saito, Keio University, Japan
Christian Sandor, Nara Institute of Science and Technology, Japan
Johannes Schöning, Hasselt University, Belgium
Roy Shilkrot, MIT Media Lab, USA
Jürgen Steimle, Max Planck Institute for Informatics, Germany
Didier Stricker, DFKI and University of Kaiserslautern, Germany
Tsutomu Terada, Kobe University, Japan
Kristof van Laerhoven, Technische Universität Darmstadt, Germany
Daniel Wessolek, Bauhaus University Weimar, Germany
Raphael Wimmer, Universität Regensburg, Germany
Amit Zoran, The Hebrew University of Jerusalem, Israel


Adwait Sharma, Agnes Gruenerbl, Alexander Bazo, Alexander De Luca, Alvaro Cassinelli, Andrea Bianchi, Arindam Dey, Artem Dementyev, Ashley Colley, Asta Roseway, Attila Reiss, Augusto Esteves, Benjamin Petry, Bernd Froehlich, Charith Lasantha Fernando, Chengyao Wang, Chiew Seng Sean Tan, Chris Harding, Christian Winkler, Dan Novy, Daniel Buschek, Daniel Cremers, David Dobbelstein, David Quigley, Denzil Ferreira, Diana Nowacka, Dominik Schmidt, Doros Polydorou, Eduardo Velloso, Emin Huseynov, Florian Daiber, Florian Echtler, Florian Geiselhart, Florian Volk, Fraser Anderson, Galit Buchs, Gerald Pirkl, Gerhard Reitmayr, Gershon Dublon, Goshiro Yamamoto, Guy Schofield, Hao Wang, Hideaki Touyama, Hideaki Uchiyama, Hideyuki Ando, Hiroaki Yano, Hiroyuki Kajimoto, Hooman Samani, In Lee, Jan Van Den Bergh, Jeff K.T. Tang, Jeffrey Tzu Kwan Valino Koh, Jens Grubert, Jeong-Ki Hong, Jingyuan Cheng, Ju Chun Ko, Juan Pablo Forero, Judith Amores Fernandez, Julian Steil, Kashyap Todi, Kaveh Bazargan, Kazuya Murao, Keita Higuchi, Kelvin Cheng, Kening Zhu, Kenneth Moser, Kentaro Fukuchi, Kentaro Yasu, Kevin Fan, Kris M. Kitani, Lei Jing, Linh Chi Nguyen, M Ehsan Hoque, Mads Møller Jensen, Mandi Lee, Manuel Dietrich, Marcel Lancelle, Mariam Hassib, Marianna Obrist, Martin Schneider, Martin Weigel, Masa Ogata, Masayuki Kanbara, Matthew Longo, Matthias Hollick, Max Neupert, Max Pfeiffer, Mhd Yamen Saraiji, Michael Otto, Minna Pakanen, Morten Fjeld, Munehiko Sato, Naoya Isoyama, Naoya Koizumi, Natasha Jaques, Nicholas Gillian, Nimesha Ranasinghe, Oral Kaplan, Oscar Kin-Chung Au, Patrick Baudisch, Pedro Lopes, Peng Song, Pengfei Xu, Philipp M. Scholl, Piyum Fernando, Qingkun Su, Raf Ramakers, Rebekah Rousi, Reuben Kirkham, Robert Walter, Robert Xiao, Roy Eagleson, Sami Abboud, Sandrine de Ribaupierre, Seth Hunter, Shachar Moshe Maidenbaum, Shoichi Hasegawa, Stefan Radomski, Stefan Schneegass, Stefanie Zollmann, Stephan Radeck-Arneth, Steven Feiner, Steven Houben, Stina Nylander, Takafumi Koike, Takafumi Taketomi, Tilman Dingler, Toshikazu Ohshima, Toshiki Sato, Vincent Nozick, Wing Ho Andy Li, Xavier Titi, Xin Liu, Yasuto Nakanishi, Yasutoshi Makino, Yoichi Miyawaki, Yuichi Kurita, Yuichiro Fujimoto, Yuji Oyamada, Yusuke Mizushina, Yusuke Sugano, Yuta Sugiura